
running-tests

npx machina-cli add skill aiskillstore/marketplace/running-tests --openclaw
Files (1): SKILL.md (1.1 KB)

Test Command Detection

  1. Check CLAUDE.md for a project-specific test command
  2. Auto-detect if none is specified:

     File             Command
     package.json     npm test
     Cargo.toml       cargo test
     justfile         just test
     Makefile         make test
     pyproject.toml   pytest
     go.mod           go test ./...

  3. Ask the user if no command is found

Test Execution

Run the detected command and report:

  • Pass/Fail status
  • Failed test names (if any)
  • Error messages (if any)
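A minimal sketch of the execution step, assuming the command is run through a shell and a nonzero exit code means failure. The `run_tests` function and the report keys are illustrative, not the skill's API.

```python
import subprocess

def run_tests(command: str, cwd: str = ".") -> dict:
    """Run the detected test command and summarize the outcome."""
    result = subprocess.run(
        command, shell=True, cwd=cwd,
        capture_output=True, text=True,
    )
    return {
        "passed": result.returncode == 0,  # pass/fail status
        "stdout": result.stdout,           # names of failed tests, if any
        "stderr": result.stderr,           # error messages, if any
    }
```

Failed test names and error messages would then be extracted from the captured output.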

Failure Handling

  1. Analyze the failure
  2. Determine the root cause:

     Cause                Action
     Implementation bug   Fix and commit
     Test bug             Fix the test and commit
     Environment issue    Report to manager

  3. Re-run tests after the fix
  4. Confirm all tests pass
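The failure-handling loop above can be sketched as follows. `run`, `classify`, and `apply_fix` are hypothetical hooks standing in for the skill's own test run, root-cause analysis, and fix steps; the retry limit is an assumption.

```python
def fix_and_rerun(run, classify, apply_fix, max_attempts=3):
    """Re-run tests after each fix until they pass or attempts run out."""
    report = {"passed": False}
    for _ in range(max_attempts):
        report = run()
        if report["passed"]:
            return report                 # all tests pass: done
        cause = classify(report)          # "implementation" | "test" | "environment"
        if cause == "environment":
            report["escalated"] = True    # report to manager, stop retrying
            return report
        apply_fix(cause)                  # fix the code or the test, then commit
    return report
```

The environment branch short-circuits because re-running cannot help until the environment is repaired.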

Completion Report

  • Test result (pass/fail)
  • Test count
  • Fixes applied (if any)
  • Additional commits (if any)
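The completion report above could be modeled as a small data structure. The field names and `summary` format are illustrative assumptions, not the skill's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class CompletionReport:
    passed: bool                                       # overall pass/fail
    test_count: int                                    # number of tests run
    fixes_applied: list[str] = field(default_factory=list)   # fixes, if any
    extra_commits: list[str] = field(default_factory=list)   # commits, if any

    def summary(self) -> str:
        status = "PASS" if self.passed else "FAIL"
        return f"{status}: {self.test_count} tests, {len(self.fixes_applied)} fixes"
```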

Source

git clone https://github.com/aiskillstore/marketplace

View on GitHub: https://github.com/aiskillstore/marketplace/blob/main/skills/1gy/running-tests/SKILL.md

Overview

The running-tests skill orchestrates test runs across languages and build systems by auto-detecting the appropriate test command, executing it, and reporting a clear pass/fail status. When failures occur, it analyzes root causes (implementation bugs, test bugs, or environment issues), applies fixes or flags actions, re-runs tests, and delivers a structured completion report that includes test counts and any commits.

How This Skill Works

You first search for a project-specific test command via CLAUDE.md. If none is found, you auto-detect from common manifest files (package.json → npm test, Cargo.toml → cargo test, justfile → just test, Makefile → make test, pyproject.toml → pytest, go.mod → go test ./...). The skill runs the detected command using Bash, captures pass/fail status, and uses Grep/Glob to extract failing test names and error messages. If failures occur, it analyzes the cause to decide the action (implementation bug → fix and commit; test bug → fix test and commit; environment issue → report to manager), re-runs the tests after applying fixes, and compiles a completion report summarizing results and any commits or fixes.

When to Use It

  • When you need to run tests in CI or locally and want a reliable pass/fail result.
  • When your project lacks a documented test command and you rely on auto-detection from standard build files.
  • When tests fail after a change and you require root-cause analysis (implementation bug, test bug, or environment issue).
  • When preparing a verification step after fixes to confirm all tests pass before merging or releasing.
  • When you want a formal completion report including test counts and fixes applied for auditing or CI logs.

Quick Start

  1. Scan the repository for a CLAUDE.md note to obtain the test command.
  2. If absent, auto-detect from standard manifest files: package.json (npm test), Cargo.toml (cargo test), justfile (just test), Makefile (make test), pyproject.toml (pytest), go.mod (go test ./...).
  3. Run the detected test command and review pass/fail status, failing test names, and error messages.
  4. If failures occur, let the skill perform failure analysis, re-run after fixes, and review the final completion report.

Best Practices

  • Document an explicit test command (for example in CLAUDE.md) so the skill does not need to rely on auto-detection.
  • Capture verbose test output to aid in identifying failing tests and error messages.
  • Upon failure, quickly classify root cause (implementation, test, or environment) to guide fixes.
  • Always re-run the full test suite after applying any fix before issuing the completion report.
  • Document environment assumptions (versions, dependencies) to reduce flakiness and improve reproducibility.

Example Use Cases

  • A Node.js project with package.json uses npm test; running-tests detects it, executes npm test, and reports results plus any failing tests and errors.
  • A Rust workspace with Cargo.toml runs cargo test; failures trigger root-cause analysis and a subsequent re-run after fixes.
  • A Python project using pyproject.toml triggers pytest; failing tests are named and error messages are captured for quick triage and fix commits.
  • A Go project with go.mod runs go test ./...; environment issues are flagged to the manager, then retried after adjustments.
  • A Make-based project with Makefile uses make test; the tool aggregates results and produces a final completion report detailing fixes and commits.

