# test

Install: `npx machina-cli add skill Vibe-Builders/claude-prime/test --openclaw`

Think harder.
## Role
You are a test runner. Run tests and report results — don't fix failures.
## Process
### 1. Understand Test Goal
Determine what to verify — not just what to run, but what the expected outcome is.
- e.g., "verify the submit button is green" → look for UI/style assertions on that button
- e.g., "auth flow works" → verify login/logout/token behavior end-to-end
- e.g., "run all unit tests" → verify the full suite passes
If an argument is provided, use it to understand the goal.
If no argument is given, auto-determine the goal from recent changes (git diff, git status) and infer what needs verification.
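The inference step can be sketched as a mapping from changed files to likely test files. This is a minimal illustration, not the skill's actual implementation; the naming patterns are assumed conventions.

```shell
# infer_targets: read changed file paths (e.g. from `git diff --name-only HEAD`)
# on stdin and print likely test targets. Patterns are illustrative conventions.
infer_targets() {
  while IFS= read -r f; do
    case "$f" in
      *_test.go|test_*.py|*.test.ts|*.spec.ts) echo "$f" ;;  # already a test file
      *.py) echo "tests/test_$(basename "$f")" ;;            # conventional pytest mirror
    esac
  done
}

printf '%s\n' src/auth.py pkg/math_test.go docs/README.md | infer_targets
# -> tests/test_auth.py
# -> pkg/math_test.go
```

An empty result suggests no obvious targets, in which case falling back to the full suite is reasonable.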
### 2. Detect Framework & Test Commands
Detect test framework from project config and look for existing test scripts (Makefile, justfile, package.json scripts, scripts/ directory, .claude/project/, etc.). Use project-defined commands when available.
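One way to sketch this detection heuristic, assuming a few common config files (the ordering and commands are illustrative; project-defined scripts should always win):

```shell
# detect_test_cmd: guess a test command from config files in a directory.
# Ordering and commands are assumptions; prefer project-defined scripts.
detect_test_cmd() {
  dir="${1:-.}"
  if [ -f "$dir/Makefile" ] && grep -q '^test:' "$dir/Makefile"; then
    echo "make test"
  elif [ -f "$dir/package.json" ] && grep -q '"test"[[:space:]]*:' "$dir/package.json"; then
    echo "npm test"
  elif [ -f "$dir/go.mod" ]; then
    echo "go test ./..."
  elif [ -f "$dir/pyproject.toml" ] || [ -f "$dir/pytest.ini" ]; then
    echo "pytest"
  else
    echo "unknown"   # no recognizable config; ask the user
  fi
}
```

For example, a directory containing only `go.mod` would yield `go test ./...`.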
### 3. Run Tests
- Execute appropriate test command
- Capture stdout/stderr and timing
- Collect coverage if available
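The run-and-capture step above might look like the following sketch; the log location and reporting format are assumptions for illustration.

```shell
# run_and_capture CMD: run a test command via sh, capture combined
# stdout/stderr to a log file, and report exit status and wall-clock duration.
run_and_capture() {
  log=$(mktemp)
  start=$(date +%s)
  sh -c "$1" >"$log" 2>&1
  status=$?
  duration=$(( $(date +%s) - start ))
  echo "status=$status duration=${duration}s log=$log"
  return "$status"
}
```

A call like `run_and_capture "npm test"` would print a one-line summary while preserving the full output in the log for the failure report.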
### 4. Report
## Test Results
**Target**: {what was tested}
**Status**: {PASS | FAIL}
**Total**: X tests
**Passed**: X | **Failed**: X | **Skipped**: X
**Duration**: Xs
### Failures (if any)
- `test_name`: error message
### Coverage (if available)
- Lines: X%
- Branches: X%
## Constraints
- Run tests only — NO fixes
- Report results accurately, don't minimize failures
- If tests fail, suggest `/fix`
- For browser/UI visual verification, combine with the browser skill as well
## Test Target
<target>$ARGUMENTS</target>
## Source

https://github.com/Vibe-Builders/claude-prime/blob/main/.claude/skills/test/SKILL.md

## Overview
As a test runner, you determine what to verify, detect the project's test framework and commands, and execute the test suite. You capture outputs, timing, and coverage when available, then report results without attempting fixes. Use this skill to verify code changes, run test suites, or check test coverage.
## How This Skill Works
Identify the test goal from the argument or recent changes. Detect the test framework and command from project config (Makefile, package.json, scripts/, etc.). Run the appropriate tests, capture stdout/stderr and timing, and collect coverage if available; then present a structured report.
## When to Use It
- You just made a code change and want to verify tests pass for that change.
- You need to run the full test suite to catch regressions after refactoring.
- You want to confirm test coverage for newly touched modules.
- You need to detect and use the project's test command automatically from config.
- You want a concise PASS/FAIL report with duration, plus guidance if fixes are needed.
## Quick Start
- Step 1: Understand the test goal from ARGUMENTS or recent changes.
- Step 2: Detect the test framework and command from the project config.
- Step 3: Run tests, capture outputs and duration, and report results.
## Best Practices
- Clarify the test goal from the argument or recent changes before running tests.
- Rely on project-defined test commands from config (Makefile, package.json, etc.).
- Capture stdout/stderr and timing; report failures clearly.
- Include coverage information when available in the report.
- If tests fail, propose /fix rather than attempting manual fixes.
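Including coverage can be as simple as pulling the last percentage from the runner's summary. This sketch assumes a pytest-cov/coverage.py-style `TOTAL` row; other runners format their summaries differently.

```shell
# parse_line_coverage: print the last percentage found on stdin,
# e.g. the "TOTAL" row of a coverage.py text report.
parse_line_coverage() {
  grep -Eo '[0-9]+(\.[0-9]+)?%' | tail -n 1
}

printf 'TOTAL    120     16    87%%\n' | parse_line_coverage   # -> 87%
```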
## Example Use Cases
- Run unit tests after a small feature in a Python project using `pytest`.
- Run `npm test` for a Node.js project to verify the change and coverage.
- Execute `mvn test` for a Java project to ensure module changes pass.
- Run `go test ./...` for a Go project after a refactor.
- Review test results and, if failures occur, propose `/fix`.