
testing-process

npx machina-cli add skill serpro69/claude-starter-kit/testing-process --openclaw

Testing & Quality Assurance Process

Guidelines

  1. Always try to add tests for any new functionality, and make sure to cover all cases and code branches, according to requirements.
  2. Always try to add tests for any bug-fixes, if the discovered bug is not already covered by tests. If the bug was already covered by tests, fix the existing tests as needed.
  3. Always run all existing tests after you are done with a given implementation or bug-fix.

Use the following guidelines when working with tests:

  • Ensure comprehensive test coverage
  • Use table-/data-driven tests and test generation
  • Add benchmark tests and detect performance regressions
  • Use test containers for integration testing
  • Generate mocks following %LANGUAGE% best practices, using well-established %LANGUAGE% mocking tools
  • Apply property-based testing following %LANGUAGE% best practices, using well-established %LANGUAGE% testing tools
  • Propose end-to-end testing strategies if automated e2e testing is not feasible
  • Analyze and report code coverage
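The skill leaves the language open (the %LANGUAGE% placeholders are filled per project). As one illustration of the table-/data-driven guideline, a Python stdlib `unittest` test over a hypothetical `normalize_email` helper might look like:

```python
import unittest

def normalize_email(addr: str) -> str:
    # Hypothetical helper under test: trim whitespace and lower-case the address.
    return addr.strip().lower()

class TestNormalizeEmail(unittest.TestCase):
    def test_table_driven(self):
        # Each row of the table: (case name, input, expected output).
        cases = [
            ("lowercases mixed case", "User@Example.COM", "user@example.com"),
            ("strips surrounding whitespace", "  a@b.co  ", "a@b.co"),
            ("leaves normal input unchanged", "x@y.io", "x@y.io"),
        ]
        for name, given, expected in cases:
            with self.subTest(name):
                self.assertEqual(normalize_email(given), expected)
```

Run with `python -m unittest`; `subTest` reports each failing row separately, so one bad case doesn't mask the others.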

Source

git clone https://github.com/serpro69/claude-starter-kit

The skill file is at .github/templates/claude/skills/testing-process/SKILL.md in the repository.

Overview

This skill codifies when and how to test code changes, from new features to bug fixes. It advocates adding tests for new functionality, ensuring bug fixes are covered, and always running the full test suite to catch regressions. Following these guidelines improves reliability and confidence in code changes.

How This Skill Works

Add tests for new functionality; for bug fixes, add tests if the bug isn’t already covered, or fix the existing tests if it is. After implementing changes, run all tests. Leverage table-/data-driven tests and test generation, benchmark for performance regressions, use integration testing with test containers, and apply language-appropriate mocking and property-based testing. When automated end-to-end testing isn’t feasible, propose E2E strategies so end-user workflows are still covered.
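The language-appropriate mocking the skill calls for can be sketched with Python's stdlib `unittest.mock`; the `fetch_username` service and its HTTP client below are hypothetical stand-ins for a real collaborator you don't want to hit in tests:

```python
from unittest.mock import Mock

# Hypothetical service that depends on an HTTP client.
def fetch_username(client, user_id):
    resp = client.get(f"/users/{user_id}")
    return resp.json()["name"]

# Replace the real client with a mock so the test is fast and deterministic.
client = Mock()
client.get.return_value.json.return_value = {"name": "ada"}

assert fetch_username(client, 42) == "ada"
# Verify the collaborator was called exactly as expected.
client.get.assert_called_once_with("/users/42")
```

Mocking the boundary keeps the test focused on `fetch_username`'s own logic rather than on network behavior.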

When to Use It

  • When implementing a new feature, write tests that cover all required cases and code branches according to the requirements.
  • When fixing a bug, add tests if the bug isn’t already covered; if it is, adjust existing tests as needed.
  • After completing a feature or bug fix, run the entire test suite to detect regressions.
  • When performance or regression concerns exist, include benchmarks and monitor for changes in performance.
  • If automated end-to-end testing isn’t feasible, propose end-to-end testing strategies to ensure real-world workflows are validated.
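For the benchmarking point, a minimal sketch using Python's stdlib `timeit` can compare two implementations and flag a performance regression; the two string-building functions are illustrative only:

```python
import timeit

def concat_naive(parts):
    # Repeated concatenation: a typical candidate for a performance regression.
    s = ""
    for p in parts:
        s += p
    return s

def concat_join(parts):
    # Idiomatic alternative: build the string in one pass.
    return "".join(parts)

parts = ["x"] * 10_000

# Time each implementation over the same workload.
naive = timeit.timeit(lambda: concat_naive(parts), number=20)
joined = timeit.timeit(lambda: concat_join(parts), number=20)
print(f"naive: {naive:.4f}s  join: {joined:.4f}s")
```

In a real suite you would record these timings over time (or assert against a budget) so a slow change fails loudly instead of silently.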

Quick Start

  1. Identify the change (new feature or bug fix) and the required test coverage.
  2. Write tests for all cases and code paths; add or update tests for the bug as needed.
  3. Run the full test suite, review coverage, and iterate until green.

Best Practices

  • Ensure comprehensive testing across all cases and code branches.
  • Use table-/data-driven tests and test generation to cover multiple inputs efficiently.
  • Add benchmark tests and performance regression detection to guard against slowdowns.
  • Perform integration testing with test containers and robust mocking strategies.
  • Incorporate property-based testing with language-specific best practices and tooling.
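Well-established property-based tools exist for most languages (Hypothesis for Python, for example). As a minimal hand-rolled sketch of the idea, the test below checks a round-trip property over many random inputs; the run-length codec is a hypothetical function under test:

```python
import random

def run_length_encode(s):
    # Function under test: simple run-length encoding as (char, count) pairs.
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1][1] += 1
        else:
            out.append([ch, 1])
    return out

def run_length_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

# Property: decoding an encoding returns the original string, for any input.
rng = random.Random(0)  # seeded so failures are reproducible
for _ in range(200):
    s = "".join(rng.choice("abc") for _ in range(rng.randrange(0, 30)))
    assert run_length_decode(run_length_encode(s)) == s
```

A real property-based library adds input shrinking and richer generators; the invariant-over-random-inputs shape is the same.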

Example Use Cases

  • Adding a new feature and writing tests that cover all possible input combinations and edge cases.
  • Fixing a bug and adding a regression test if the bug wasn’t previously covered.
  • Running the full test suite after changes to catch regressions early.
  • Using data-driven tests to validate behavior across a wide range of inputs.
  • Proposing an end-to-end testing plan when automated E2E is not currently feasible.
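The bug-fix use case above, adding a regression test that pins the fix, can be sketched in Python `unittest`; `parse_version` and its leading-"v" bug are hypothetical:

```python
import unittest

def parse_version(text):
    # Fixed function: a hypothetical earlier version raised ValueError
    # on a leading "v" prefix such as "v1.2.3".
    text = text.strip()
    if text.startswith("v"):
        text = text[1:]
    return tuple(int(part) for part in text.split("."))

class TestParseVersionRegression(unittest.TestCase):
    def test_v_prefix_bug(self):
        # Regression test pinning the bug fix: "v1.2.3" used to crash.
        self.assertEqual(parse_version("v1.2.3"), (1, 2, 3))

    def test_plain_version_still_works(self):
        # Existing behavior must be unchanged by the fix.
        self.assertEqual(parse_version("0.10.1"), (0, 10, 1))
```

Keeping the regression test in the suite means the bug cannot silently return in a later change.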

