

@alirezarezvani

npx machina-cli add skill @alirezarezvani/tdd-guide --openclaw
Files (1): SKILL.md (4.2 KB)

TDD Guide

Test-driven development skill for generating tests, analyzing coverage, and guiding red-green-refactor workflows across Jest, Pytest, JUnit, and Vitest.

Capabilities

| Capability | Description |
| --- | --- |
| Test Generation | Convert requirements or code into test cases with proper structure |
| Coverage Analysis | Parse LCOV/JSON/XML reports, identify gaps, prioritize fixes |
| TDD Workflow | Guide red-green-refactor cycles with validation |
| Framework Adapters | Generate tests for Jest, Pytest, JUnit, Vitest, Mocha |
| Quality Scoring | Assess test isolation, assertions, naming; detect test smells |
| Fixture Generation | Create realistic test data, mocks, and factories |

Workflows

Generate Tests from Code

  1. Provide source code (TypeScript, JavaScript, Python, Java)
  2. Specify target framework (Jest, Pytest, JUnit, Vitest)
  3. Run test_generator.py with requirements
  4. Review generated test stubs
  5. Validation: Tests compile and cover happy path, error cases, edge cases
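As an illustration of the structure step 5 checks for, here is a hypothetical Pytest stub covering a happy path, an error case, and an edge case. The divide function and its cases are invented for this sketch; they are not actual output of test_generator.py:

```python
import pytest


def divide(a, b):
    """Hypothetical function under test (illustrative only)."""
    if b == 0:
        raise ZeroDivisionError("division by zero")
    return a / b


class TestDivide:
    def test_happy_path(self):
        assert divide(10, 2) == 5

    def test_error_case_zero_divisor(self):
        with pytest.raises(ZeroDivisionError):
            divide(1, 0)

    def test_edge_case_negative_operands(self):
        assert divide(-9, 3) == -3
```

A generated stub like this is scaffolding: the assertions and case selection still need human review, as noted under Limitations.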

Analyze Coverage Gaps

  1. Generate coverage report from test runner (npm test -- --coverage)
  2. Run coverage_analyzer.py on LCOV/JSON/XML report
  3. Review prioritized gaps (P0/P1/P2)
  4. Generate missing tests for uncovered paths
  5. Validation: Coverage meets target threshold (typically 80%+)
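A minimal sketch of the parsing and prioritization steps, assuming the standard LCOV text format (SF: names a source file, DA:<line>,<hits> records one instrumented line, end_of_record closes a file). The P0/P1/P2 labels mirror the buckets above, but the exact cutoffs here are invented, not coverage_analyzer.py's actual logic:

```python
def parse_lcov(text):
    """Parse minimal LCOV data into {source_file: coverage_percent}."""
    results = {}
    current, hit, total = None, 0, 0
    for line in text.splitlines():
        if line.startswith("SF:"):
            current, hit, total = line[3:], 0, 0
        elif line.startswith("DA:"):
            _, hits = line[3:].split(",")[:2]
            total += 1
            hit += int(hits) > 0
        elif line == "end_of_record" and current:
            results[current] = 100.0 * hit / total if total else 100.0
    return results


def prioritize(results, threshold=80.0):
    """Label each file: P0 below 50%, P1 below the threshold, else P2."""
    def label(pct):
        return "P0" if pct < 50 else "P1" if pct < threshold else "P2"
    return {f: (label(p), round(p, 1)) for f, p in results.items()}
```

For example, a file with one hit line out of three would land in P0 and be the first candidate for generated tests.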

TDD New Feature

  1. Write failing test first (RED)
  2. Run tdd_workflow.py --phase red to validate
  3. Implement minimal code to pass (GREEN)
  4. Run tdd_workflow.py --phase green to validate
  5. Refactor while keeping tests green (REFACTOR)
  6. Validation: All tests pass after each cycle
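The cycle above, sketched in Pytest for a hypothetical slugify feature (the names and behavior are invented for illustration):

```python
# RED: the test is written first and fails, because slugify does not exist yet.
def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("Hello World") == "hello-world"


# GREEN: the minimal implementation that makes the test pass.
def slugify(text):
    return text.lower().replace(" ", "-")


# REFACTOR: restructure while the test stays green; this version also
# collapses runs of whitespace without changing the tested behavior.
def slugify(text):  # intentional redefinition to show the refactor step
    return "-".join(text.lower().split())
```

Each phase transition is where tdd_workflow.py's validation would run: confirming the test fails in RED, passes in GREEN, and still passes after REFACTOR.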

Tools

| Tool | Purpose | Usage |
| --- | --- | --- |
| test_generator.py | Generate test cases from code/requirements | python scripts/test_generator.py --input source.py --framework pytest |
| coverage_analyzer.py | Parse and analyze coverage reports | python scripts/coverage_analyzer.py --report lcov.info --threshold 80 |
| tdd_workflow.py | Guide red-green-refactor cycles | python scripts/tdd_workflow.py --phase red --test test_auth.py |
| framework_adapter.py | Convert tests between frameworks | python scripts/framework_adapter.py --from jest --to pytest |
| fixture_generator.py | Generate test data and mocks | python scripts/fixture_generator.py --entity User --count 5 |
| metrics_calculator.py | Calculate test quality metrics | python scripts/metrics_calculator.py --tests tests/ |
| format_detector.py | Detect language and framework | python scripts/format_detector.py --file source.ts |
| output_formatter.py | Format output for CLI/desktop/CI | python scripts/output_formatter.py --format markdown |

Input Requirements

For Test Generation:

  • Source code (file path or pasted content)
  • Target framework (Jest, Pytest, JUnit, Vitest)
  • Coverage scope (unit, integration, edge cases)

For Coverage Analysis:

  • Coverage report file (LCOV, JSON, or XML format)
  • Optional: Source code for context
  • Optional: Target threshold percentage

For TDD Workflow:

  • Feature requirements or user story
  • Current phase (RED, GREEN, REFACTOR)
  • Test code and implementation status

Limitations

| Scope | Details |
| --- | --- |
| Unit test focus | Integration and E2E tests require different patterns |
| Static analysis | Cannot execute tests or measure runtime behavior |
| Language support | Best for TypeScript, JavaScript, Python, Java |
| Report formats | LCOV, JSON, XML only; other formats need conversion |
| Generated tests | Provide scaffolding; require human review for complex logic |

When to use other tools:

  • E2E testing: Playwright, Cypress, Selenium
  • Performance testing: k6, JMeter, Locust
  • Security testing: OWASP ZAP, Burp Suite

Source

git clone https://clawhub.ai/alirezarezvani/tdd-guide

Overview

Guides test-driven development across Jest, Pytest, JUnit, and Vitest by generating tests, analyzing coverage, and steering red-green-refactor cycles. Includes fixture generation, quality scoring, and framework adapters to streamline TDD workflows.

How This Skill Works

The skill coordinates several scripts to cover the TDD lifecycle: generate tests from code with test_generator.py, analyze coverage with coverage_analyzer.py, and guide red-green-refactor cycles via tdd_workflow.py. Framework adapters and fixture generators help convert tests between frameworks and produce realistic data without runtime execution.

When to Use It

  • When you have requirements or source code and need automatic test generation for a specific framework
  • When you need to analyze a coverage report (LCOV/JSON/XML) and prioritize gaps
  • When adopting a TDD workflow to add a new feature using red-green-refactor cycles
  • When migrating or converting tests between Jest, Pytest, JUnit, Vitest, or Mocha
  • When you want to assess and improve test quality with metrics like isolation, assertions, and naming

Quick Start

  1. Provide source code and target framework, then run python scripts/test_generator.py --input <source> --framework <jest|pytest|junit|vitest>
  2. Generate and review coverage with python scripts/coverage_analyzer.py --report <lcov.info> --threshold 80
  3. Run the TDD cycle with python scripts/tdd_workflow.py --phase red --test <test_name>

Best Practices

  • Start each feature by writing a failing test (RED) before implementing code
  • Run coverage analysis early and target uncovered paths
  • Keep tests isolated, deterministic, and easy to read
  • Refactor tests and code together while keeping all tests green
  • Use fixture_generator to craft realistic test data and mocks
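The last practice can be sketched as a simple factory function. The User fields below are invented for illustration and are not what fixture_generator.py actually emits:

```python
import itertools

_ids = itertools.count(1)


def make_user(**overrides):
    """Factory for User test data; any field can be overridden per test."""
    uid = next(_ids)
    user = {
        "id": uid,
        "name": f"user{uid}",
        "email": f"user{uid}@example.com",
        "active": True,
    }
    user.update(overrides)
    return user


def make_users(count, **overrides):
    """Build several users at once, each with a unique id."""
    return [make_user(**overrides) for _ in range(count)]
```

For example, make_users(5, active=False) yields five distinct inactive users, which keeps tests deterministic and readable while avoiding hand-written fixtures.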

Example Use Cases

  • Generate Jest tests from a TypeScript service class based on requirements
  • Analyze a Pytest project’s coverage report and fill LCOV gaps
  • Drive a RED-GREEN-REFACTOR cycle for a new feature in Java with JUnit
  • Convert tests from Jest to Pytest using framework_adapter.py
  • Create mocked data with fixture_generator for a User entity in tests
