ln-782-test-runner
npx machina-cli add skill levnikolaevich/claude-code-skills/ln-782-test-runner --openclaw
Paths: File paths (shared/, references/, ../ln-*) are relative to the skills repo root. If not found at CWD, locate this SKILL.md directory and go up one level for the repo root.
Type: L3 Worker Category: 7XX Project Bootstrap Parent: ln-780-bootstrap-verifier
Purpose
Detects test frameworks, executes all test suites, and reports results including pass/fail counts and optional coverage.
Scope:
- Auto-detect test frameworks from project configuration
- Execute test suites for all detected frameworks
- Parse test output for pass/fail counts
- Generate coverage reports when enabled
Out of Scope:
- Building projects (handled by ln-781)
- Container operations (handled by ln-783)
- Writing or fixing tests
When to Use
| Scenario | Use This Skill |
|---|---|
| Called by ln-780 orchestrator | Yes |
| Standalone test execution | Yes |
| CI/CD pipeline test step | Yes |
| Build verification needed | No, use ln-781 |
Workflow
Step 1: Detect Test Frameworks
Identify test frameworks from project configuration files.
| Marker | Test Framework | Project Type |
|---|---|---|
| vitest.config.* | Vitest | Node.js |
| jest.config.* | Jest | Node.js |
| *.test.ts in package.json | Vitest/Jest | Node.js |
| xunit / nunit in *.csproj | xUnit/NUnit | .NET |
| pytest.ini / conftest.py | pytest | Python |
| *_test.go files | go test | Go |
| tests/ with Cargo.toml | cargo test | Rust |
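The marker-based detection above can be sketched as a small scan over the project root. This is a hypothetical illustration, not the skill's actual implementation; the glob patterns are assumptions, and .csproj content inspection (for xUnit/NUnit) is omitted because it requires reading file contents rather than matching file names.

```python
# Hypothetical sketch: detect test frameworks from marker files.
# Framework names follow the table above; .csproj inspection omitted.
from pathlib import Path

MARKERS = [
    ("vitest.config.*", "Vitest"),
    ("jest.config.*", "Jest"),
    ("pytest.ini", "pytest"),
    ("conftest.py", "pytest"),
    ("**/*_test.go", "go test"),
    ("Cargo.toml", "cargo test"),
]

def detect_frameworks(root: str) -> list[str]:
    """Return frameworks whose marker files exist under root."""
    base = Path(root)
    found = []
    for pattern, framework in MARKERS:
        if any(base.glob(pattern)) and framework not in found:
            found.append(framework)
    return found
```

A project containing both `pytest.ini` and `*_test.go` files would report both pytest and go test, so every detected suite is run in Step 2.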
Step 2: Execute Test Suites
Run tests for each detected framework.
| Framework | Execution Strategy |
|---|---|
| Vitest | Run in single-run mode with JSON reporter |
| Jest | Run with JSON output |
| xUnit/NUnit | Run with logger for structured output |
| pytest | Run with JSON plugin or verbose output |
| go test | Run with JSON output flag |
| cargo test | Run with standard output parsing |
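The execution strategies above map naturally to concrete commands. The mapping below is a hedged sketch with commonly used flags, not the skill's verified command set; in particular, pytest's JSON output assumes the third-party pytest-json-report plugin is installed.

```python
# Hypothetical framework-to-command mapping with machine-readable output.
# Flags are common defaults; pytest JSON output assumes pytest-json-report.
TEST_COMMANDS = {
    "Vitest": ["npx", "vitest", "run", "--reporter=json"],
    "Jest": ["npx", "jest", "--json"],
    "xUnit/NUnit": ["dotnet", "test", "--logger", "trx"],
    "pytest": ["pytest", "--json-report", "-q"],
    "go test": ["go", "test", "-json", "./..."],
    "cargo test": ["cargo", "test"],  # parse the human-readable summary line
}
```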
Step 3: Parse Results
Extract test results from framework output.
| Metric | Description |
|---|---|
| total | Total number of tests discovered |
| passed | Tests that completed successfully |
| failed | Tests that failed assertions |
| skipped | Tests marked as skip/ignore |
| duration | Total execution time |
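As one concrete parsing example, `go test -json` emits one JSON event per line; folding those events into the metrics above might look like the sketch below (an illustration under the assumption of the documented event format, not the skill's actual parser).

```python
# Hypothetical sketch: fold `go test -json` event lines into the metrics above.
# Each line is a JSON object; "Action" is pass/fail/skip/run/output/etc.
import json

def parse_go_test_json(output: str) -> dict:
    counts = {"total": 0, "passed": 0, "failed": 0, "skipped": 0, "duration": 0.0}
    for line in output.splitlines():
        if not line.strip():
            continue
        event = json.loads(line)
        # Only per-test events carry a "Test" field; package-level
        # pass/fail events are ignored for the counts.
        if "Test" not in event:
            continue
        action = event.get("Action")
        if action == "pass":
            counts["passed"] += 1
        elif action == "fail":
            counts["failed"] += 1
        elif action == "skip":
            counts["skipped"] += 1
        else:
            continue  # run/output/pause events don't affect counts
        counts["total"] += 1
        counts["duration"] += event.get("Elapsed", 0.0)
    return counts
```

This is why the Critical Rules below insist on parsing actual results: the process exit code alone cannot distinguish one failing test from fifty.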
Step 4: Generate Coverage (Optional)
When coverage enabled, collect coverage metrics.
| Framework | Coverage Tool |
|---|---|
| Vitest/Jest | c8 / istanbul |
| .NET | coverlet |
| pytest | pytest-cov |
| Go | go test -cover |
| Rust | cargo-tarpaulin |
Coverage Metrics:
| Metric | Description |
|---|---|
| linesCovered | Lines executed during tests |
| linesTotal | Total lines in codebase |
| percentage | Coverage percentage |
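The percentage metric is derived from the two line counts, with a guard for an empty codebase:

```python
# Coverage percentage from the two line counts above; an empty
# codebase reports 0.0 rather than dividing by zero.
def coverage_percentage(lines_covered: int, lines_total: int) -> float:
    if lines_total == 0:
        return 0.0
    return round(100.0 * lines_covered / lines_total, 2)
```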
Step 5: Report Results
Return structured results to orchestrator.
Result Structure:
| Field | Description |
|---|---|
| suiteName | Test suite identifier |
| framework | Detected test framework |
| status | passed / failed / error |
| total | Total test count |
| passed | Passed test count |
| failed | Failed test count |
| skipped | Skipped test count |
| duration | Execution time in seconds |
| failures | Array of failure details (test name, message) |
| coverage | Coverage metrics (if enabled) |
Error Handling
| Error Type | Action |
|---|---|
| No tests found | Report warning, status = passed (0 tests) |
| Test timeout | Report timeout, include partial results |
| Framework error | Log error, report as error status |
| Missing dependencies | Report missing test dependencies |
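The timeout and missing-dependency rows above can be sketched with a subprocess wrapper: enforce a hard limit, keep any partial output, and report missing tooling as a structured error rather than crashing. A minimal sketch, assuming a command-list interface:

```python
# Hypothetical sketch: run a test command with a hard timeout and
# map failures to the error-handling table above.
import subprocess

def run_with_timeout(cmd: list[str], timeout_s: int = 300) -> dict:
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout_s)
        return {"status": "completed", "exit_code": proc.returncode, "stdout": proc.stdout}
    except subprocess.TimeoutExpired as exc:
        # Keep partial results so the report is still actionable.
        partial = exc.stdout.decode() if isinstance(exc.stdout, bytes) else (exc.stdout or "")
        return {"status": "timeout", "stdout": partial}
    except FileNotFoundError:
        # e.g. pytest/go/cargo binary not installed
        return {"status": "error", "reason": "missing test dependencies"}
```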
Options
| Option | Default | Description |
|---|---|---|
| skipTests | false | Skip execution if no tests found |
| allowFailures | false | Report success even if tests fail |
| coverage | false | Generate coverage report |
| timeout | 300 | Max execution time in seconds |
| parallel | true | Run test suites in parallel when possible |
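The options table translates directly to a defaults object; the sketch below also shows how `allowFailures` interacts with the overall status (an illustration of the documented semantics, not the skill's code):

```python
# Hypothetical options object with the defaults from the table above.
from dataclasses import dataclass

@dataclass
class RunnerOptions:
    skipTests: bool = False
    allowFailures: bool = False
    coverage: bool = False
    timeout: int = 300       # seconds
    parallel: bool = True

def overall_status(opts: RunnerOptions, failed: int) -> str:
    # allowFailures reports success even when tests fail.
    return "passed" if failed == 0 or opts.allowFailures else "failed"
```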
Critical Rules
- Run all detected test suites - do not skip suites silently
- Parse actual results - do not rely only on exit code
- Include failure details - provide actionable information for debugging
- Respect timeout - prevent hanging on infinite loops
Definition of Done
- All test frameworks detected
- All test suites executed
- Results parsed and structured
- Coverage collected (if enabled)
- Results returned to orchestrator
Reference Files
- Parent:
../ln-780-bootstrap-verifier/SKILL.md
Version: 2.0.0 Last Updated: 2026-01-10
Source
git clone https://github.com/levnikolaevich/claude-code-skills.git
Overview
ln-782-test-runner automatically detects test frameworks in a project, executes all test suites, and reports pass/fail counts with optional coverage data. It supports multiple ecosystems including Node.js, .NET, Python, Go, and Rust, and returns structured results for the orchestrator.
How This Skill Works
The skill auto-detects frameworks from project configuration files, runs the appropriate test command for each detected framework, and parses the output to extract total, passed, failed, skipped, and duration. When coverage is enabled, it collects metrics using the framework-specific toolchain: c8/istanbul for JavaScript, coverlet for .NET, pytest-cov for Python, go test -cover for Go, or cargo-tarpaulin for Rust.
When to Use It
- Called by ln-780 orchestrator
- Standalone test execution
- CI/CD pipeline test step
- Not for build verification; use ln-781 instead
Quick Start
- Step 1: Detect test frameworks from project configuration files
- Step 2: Execute test suites for each detected framework
- Step 3: Generate and return structured results including coverage if enabled
Best Practices
- Auto-detect frameworks from configuration to avoid manual mapping
- Run tests with framework appropriate output flags (JSON or structured logs)
- Parse results to extract total, passed, failed, skipped, and duration
- Enable coverage only when requested and use the recommended toolchain per framework
- Gracefully handle missing tests and dependency issues with clear statuses
Example Use Cases
- A Node.js monorepo where both vitest.config.* and jest.config.* are detected runs both test suites
- A .NET project referencing xUnit or NUnit in its .csproj is tested with a structured logger
- A Python project with pytest.ini or conftest.py runs pytest with JSON or verbose output
- A Go project with *_test.go files runs go test with the JSON output flag
- A Rust project with tests alongside Cargo.toml runs cargo test and parses standard output