
testing

npx machina-cli add skill dewitt4/claude-code-template/testing --openclaw

Testing Skill

When working on testing tasks, follow these guidelines:

Test Generation Approach

1. Understand the Code

  • Analyze the function/module to be tested
  • Identify inputs, outputs, and side effects
  • Note dependencies and external interactions
  • Understand the business logic and requirements

2. Test Categories

Unit Tests

  • Test individual functions/methods in isolation
  • Mock external dependencies
  • Focus on single responsibility
  • Fast execution
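
A minimal Python sketch of these points (the function and its client are hypothetical; any runner that discovers `test_*` functions, such as pytest, would execute this):

```python
from unittest.mock import Mock

def fetch_display_name(user_id, client):
    """Code under test (hypothetical): format a name fetched via an external client."""
    record = client.get_user(user_id)   # external dependency
    return record["name"].strip().title()

def test_fetch_display_name_formats_api_result():
    # Mock the external client so the test runs in isolation and stays fast
    client = Mock()
    client.get_user.return_value = {"name": "  ada lovelace "}

    assert fetch_display_name(42, client) == "Ada Lovelace"
    client.get_user.assert_called_once_with(42)
```

Because the client is mocked, the test exercises only the formatting logic, the unit's single responsibility.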

Integration Tests

  • Test component interactions
  • Use real dependencies or test doubles as appropriate
  • Verify data flow between modules
  • Test API endpoints with actual calls
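
One way to sketch an integration test in Python is to wire the component under test to an in-memory fake instead of a real database (all class names here are hypothetical):

```python
class InMemoryRepo:
    """Test double: a fake repository standing in for a real database."""
    def __init__(self):
        self._rows = {}
    def save(self, key, value):
        self._rows[key] = value
    def load(self, key):
        return self._rows[key]

class OrderService:
    """Component under test: persists orders through whatever repo it is given."""
    def __init__(self, repo):
        self.repo = repo
    def place_order(self, order_id, items):
        self.repo.save(order_id, {"items": items, "status": "placed"})
        return self.repo.load(order_id)

def test_place_order_persists_through_repo():
    # Verifies the data flow between service and repository, not just one unit
    service = OrderService(InMemoryRepo())
    result = service.place_order("o1", ["book"])
    assert result == {"items": ["book"], "status": "placed"}
```

The same test could later run against a real repository implementation to verify the actual integration.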

End-to-End Tests

  • Test complete user workflows
  • Use a real or staging environment
  • Validate from user perspective
  • Cover critical business paths

3. Test Coverage

Ensure tests cover:

  • Happy Path: Normal, expected inputs and flows
  • Edge Cases: Boundary values, empty inputs, maximum values
  • Error Cases: Invalid inputs, missing data, exceptions
  • State Changes: Before/after state verification
  • Side Effects: Database changes, API calls, file operations
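
As an illustration of covering happy path, edge cases, and error cases for one function (a hypothetical `safe_divide`, using pytest for the exception check):

```python
import pytest

def safe_divide(a, b):
    """Code under test (hypothetical): divide, rejecting a zero divisor."""
    if b == 0:
        raise ValueError("divisor must be non-zero")
    return a / b

def test_safe_divide_happy_path():
    assert safe_divide(10, 2) == 5

def test_safe_divide_edge_case_negative_values():
    assert safe_divide(-9, 3) == -3

def test_safe_divide_error_case_zero_divisor():
    # Error case: invalid input should raise, not return a wrong value
    with pytest.raises(ValueError):
        safe_divide(1, 0)
```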

4. Test Structure (AAA Pattern)

// Arrange: Set up test data and conditions
// Act: Execute the code under test
// Assert: Verify the results
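
Filled in with a concrete (hypothetical) Python example, the three sections look like this:

```python
def apply_discount(price, rate):
    """Code under test (hypothetical): apply a fractional discount."""
    return round(price * (1 - rate), 2)

def test_apply_discount_with_ten_percent_returns_reduced_price():
    # Arrange: set up test data and conditions
    price, rate = 100.0, 0.10
    # Act: execute the code under test
    result = apply_discount(price, rate)
    # Assert: verify the results
    assert result == 90.0
```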

5. Best Practices

  • Clear Names: Test names should describe what's being tested and expected behavior
    • Format: test_[method]_[scenario]_[expectedResult]
    • Example: test_calculateTotal_withDiscount_returnsReducedAmount
  • One Assert Per Test: Each test should verify one behavior
  • Independent Tests: Tests should not depend on each other
  • Fast Tests: Keep unit tests fast (< 100ms)
  • Reliable Tests: No flaky tests, no random data without seeds
  • Maintainable: Easy to understand and update
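
Applied in Python (where snake_case is conventional, so the naming format becomes `test_method_scenario_expected_result`; `calculate_total` is hypothetical), independent one-assert tests might look like:

```python
def calculate_total(subtotal, discount=0.0):
    """Code under test (hypothetical)."""
    return subtotal * (1 - discount)

def test_calculate_total_with_discount_returns_reduced_amount():
    # One behavior per test: the discounted path
    assert calculate_total(200.0, discount=0.25) == 150.0

def test_calculate_total_without_discount_returns_subtotal():
    # Independent of the test above: no shared state, any run order works
    assert calculate_total(200.0) == 200.0
```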

6. Testing Frameworks

Adapt to the project's testing framework:

  • JavaScript/TypeScript: Jest, Mocha, Vitest, Cypress
  • Python: pytest, unittest, nose2
  • Java: JUnit, TestNG, Mockito
  • C#/.NET: xUnit, NUnit, MSTest
  • Go: testing package, testify
  • Ruby: RSpec, Minitest

7. Mocking Strategy

  • Mock external dependencies (APIs, databases, file system)
  • Use test doubles appropriately:
    • Mocks: Verify interactions
    • Stubs: Provide predetermined responses
    • Fakes: Simplified working implementations
    • Spies: Record information about calls
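
In Python's `unittest.mock`, a single `MagicMock` can play several of these roles at once; a sketch (the mailer and `send_welcome` are hypothetical):

```python
from unittest.mock import MagicMock

def send_welcome(user, mailer):
    """Code under test (hypothetical): email a new user via a mailer dependency."""
    mailer.send(to=user["email"], subject="Welcome!")
    return True

def test_send_welcome_emails_the_user():
    # Stub behavior: a predetermined response for any send() call
    mailer = MagicMock()
    mailer.send.return_value = "queued"

    assert send_welcome({"email": "a@example.com"}, mailer) is True

    # Mock/spy behavior: verify exactly how the dependency was called
    mailer.send.assert_called_once_with(to="a@example.com", subject="Welcome!")
```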

8. Output Format

When generating tests, provide:

  1. Test file location and name
  2. Necessary imports and setup
  3. Complete test cases with:
    • Descriptive names
    • Arrange/Act/Assert sections
    • Comments explaining complex scenarios
  4. Coverage summary: What's tested, what's not
  5. Running instructions: How to execute tests

Code Coverage Goals

  • Minimum: 70% overall coverage
  • Target: 80%+ for critical business logic
  • Focus: Quality over quantity - meaningful tests, not just coverage numbers

When Improving Existing Tests

  1. Analyze current coverage: Identify gaps
  2. Review existing tests: Check for anti-patterns
  3. Prioritize: Start with critical paths and low-coverage areas
  4. Refactor: Make tests more maintainable
  5. Document: Add comments for complex test scenarios

Source

git clone https://github.com/dewitt4/claude-code-template
Skill file: .claude/skills/testing/SKILL.md

Overview

This skill generates test cases, improves coverage, and shapes testing strategy across unit, integration, and end-to-end tests. It guides you through understanding the code under test, selecting test categories, and applying the AAA structure to produce reliable, maintainable tests.

How This Skill Works

Start by understanding the code under test: inputs, outputs, side effects, and dependencies. Then categorize tests into unit, integration, and end-to-end, applying the AAA pattern (Arrange, Act, Assert) and aligning with project frameworks and mocking strategies. Finally, assess coverage, adjust test scope, and document running instructions and outcomes.

When to Use It

  • When adding a new feature, to create unit and integration tests that cover happy paths and edge cases.
  • Before releasing, to validate critical user flows with end-to-end tests in staging.
  • During refactoring of a core module, to ensure behavior remains intact via regression tests.
  • When debugging flaky tests, to introduce deterministic mocks and improve test reliability.
  • When evaluating project quality, to drive coverage goals and identify gaps in critical business logic.

Quick Start

  1. Identify the function or component to test and define expected outcomes.
  2. Choose the test category (unit, integration, or e2e) and sketch the AAA skeleton.
  3. Implement the tests, run them, review coverage, and iterate.

Best Practices

  • Clear test names describe behavior and expectations.
  • Format: test_[method]_[scenario]_[expectedResult].
  • One Assert Per Test.
  • Independent tests that do not share state.
  • Fast and reliable tests; use seeds for randomness and avoid flakiness.
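
A sketch of the seeding point: injecting a seeded RNG makes a "random" test deterministic, so it can never flake (`shuffle_deck` is hypothetical):

```python
import random

def shuffle_deck(cards, rng):
    """Code under test (hypothetical): shuffle using an injected RNG."""
    deck = list(cards)
    rng.shuffle(deck)
    return deck

def test_shuffle_deck_is_deterministic_with_seed():
    # The same seed yields the same order on every run
    first = shuffle_deck(range(10), random.Random(42))
    second = shuffle_deck(range(10), random.Random(42))
    assert first == second
    # And shuffling must preserve the deck's contents
    assert sorted(first) == list(range(10))
```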

Example Use Cases

  • Unit test for a utility function calculating totals with discounts.
  • Integration test validating data flow between services and API responses.
  • End-to-end test simulating a user signup from start to finish in a staging env.
  • Mocking an external API with stubs to keep unit tests fast.
  • Coverage report driving targeted improvements in a critical module.
