
fix

npx machina-cli add skill Borda/.home/fix --openclaw
<objective>

Fix software bugs with a disciplined reproduce-first workflow. Before touching any code, understand the root cause and capture the bug in a regression test. Then apply the minimal fix, verify all tests pass, and finish with linting and quality checks. The regression test stays in the codebase to prevent re-introduction.

</objective>

<inputs>
  • $ARGUMENTS: required — one of:
    • A bug description in plain text (e.g., "TypeError when passing None to transform()")
    • A GitHub issue number (e.g., 123 — fetched via gh issue view)
    • An error message or traceback snippet
    • A failing test name (e.g., tests/test_transforms.py::test_none_input)
</inputs>

<workflow>

Step 1: Understand the problem

Gather all available context about the bug:

# If issue number: fetch the full issue with comments
gh issue view <number> --comments

If an error message or pattern was provided: use the Grep tool (pattern <error_pattern>, path .) to search the codebase for the failing code path. Adjust to src/, lib/, or app/ as appropriate for the project layout.


# If failing test: run it to capture the exact failure
python -m pytest <test_path> -v --tb=long 2>&1 | tail -40

Spawn a sw-engineer agent to analyze the failing code path and identify:

  • The root cause (not just the symptom)
  • The minimal code surface that needs to change
  • Any related code that might be affected by the fix

Step 2: Reproduce the bug

Create or identify a test that demonstrates the failure:

# If a failing test already exists — run it to confirm it fails
python -m pytest <test_file>::<test_name> -v --tb=short

# If no test exists — write a regression test that captures the bug
# Name it: test_<function>_<bug_description> (e.g., test_transform_none_input)

Spawn a qa-specialist agent to write the regression test if one doesn't exist:

  • The test must fail against the current code (proving the bug exists)
  • Use pytest.mark.parametrize if the bug affects multiple input patterns
  • Keep the test minimal — exercise exactly the broken behavior
  • Add a brief comment linking to the issue if applicable (e.g., # Regression test for #123)

Gate: the regression test must fail before proceeding. If it passes, the bug isn't properly captured — revisit Step 1.

Step 3: Apply the fix

Make the minimal change to fix the root cause:

  1. Edit only the code necessary to resolve the bug
  2. Run the regression test to confirm it now passes:
    python -m pytest <test_file>::<test_name> -v --tb=short
    
  3. Run the full test suite for the affected module to check for regressions:
    # Step 3: regression gate — confirms fix does not break existing tests
    python -m pytest <test_dir> -v --tb=short
    
  4. If any existing tests break: the fix has side effects — reconsider the approach
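Continuing the hypothetical `transform()` example, the minimal fix is a single guard clause rather than a broader rewrite:

```python
# Fixed version of the hypothetical transform(): the only change is a
# guard for the input the bug report exercised; all other behavior stays.


def transform(value):
    if value is None:
        return None  # minimal fix: pass None through instead of raising
    return value + "!"
```

After this change the Step 2 regression test passes unmodified, and the module suite run above confirms the fix has no side effects.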

Step 4: Linting and quality

Spawn a linting-expert agent (or run directly) to ensure the fix meets code quality standards:

# Run ruff for linting and formatting
ruff check <changed_files> --fix
ruff format <changed_files>

# Run mypy for type checking if configured
mypy <changed_files> --no-error-summary 2>&1 | head -20

# Step 4: final full-suite clean run before commit
python -m pytest <test_dir> -v --tb=short

Step 5: Verify and report

Output a structured report:

## Fix Report: <bug summary>

### Root Cause
[1-2 sentence explanation of what was wrong and why]

### Regression Test
- File: <test_file>
- Test: <test_name>
- Confirms: [what behavior the test locks in]

### Changes Made
| File | Change | Lines |
|------|--------|-------|
| path/to/file.py | description of fix | -N/+M |

### Test Results
- Regression test: PASS
- Full suite: PASS (N tests)
- Lint: clean

### Follow-up
- [any related issues or code that should be reviewed]

## Confidence
**Score**: [0.N]
**Gaps**: [e.g., could not reproduce locally, partial traceback only, fix not runtime-tested]
</workflow>

<notes>
  • Reproduce first: never fix a bug you can't demonstrate with a test — the test is the proof
  • Minimal fix: change only what's necessary to resolve the root cause — avoid incidental refactoring
  • The regression test is a permanent contribution — it prevents the bug from recurring
  • If the bug is in .claude/ config files: run self-mentor audit + /sync after fixing
  • Related agents: sw-engineer (root cause analysis), qa-specialist (regression test), linting-expert (quality)
  • Follow-up chains:
    • Fix involves structural improvements beyond the bug → /refactor for test-first code quality pass
    • Fix touches non-trivial code paths → /review for full multi-agent quality validation
    • Fix required consistent renames or annotation changes across many files → /codex to delegate the mechanical sweep
</notes>

Source

https://github.com/Borda/.home/blob/main/.claude/skills/fix/SKILL.md

Overview

Fix software bugs with a disciplined reproduce-first workflow. Before touching code, understand the root cause and capture the bug in a regression test. Then apply the minimal fix, verify with linting and quality checks, and keep the regression test in the codebase to prevent reintroduction.

How This Skill Works

Start by gathering context from the bug report, issue, or error message. Reproduce the bug by creating a regression test that fails, anchoring the behavior in code. Implement the minimal fix, run the regression test, and verify with a full test suite plus linting and type checks; the regression test stays in the codebase to guard against future regressions.

When to Use It

  • You have a bug description, issue number, error message, or failing test and need to diagnose the cause.
  • CI reports a failing test and you want a reproducible path to verification.
  • You're about to touch production code and want to minimize risk by capturing behavior with a regression test first.
  • A recent refactor introduced a subtle bug and you want to confirm it is fully fixed without regressing other behavior.
  • You want to validate the fix with linting and quality checks after code changes.

Quick Start

  1. Step 1: Understand the problem and gather context from the issue or error message; search the codebase if needed.
  2. Step 2: Reproduce the bug by writing a failing regression test that captures the behavior, and name it appropriately.
  3. Step 3: Apply the minimal fix, run the regression test, then run the full test suite and linting to ensure quality.

Best Practices

  • Reproduce the bug and write a regression test that fails against current code.
  • Keep the regression test minimal and focused on the broken behavior.
  • Edit only the code necessary to resolve the root cause.
  • Run the regression test, followed by the module's full test suite to catch regressions.
  • Document the fix with a brief note linking to the issue and ensure the regression test stays in the codebase.

Example Use Cases

  • Bug described as TypeError when passing None to transform; add regression test test_transform_none_input; fix in the transform logic; run pytest to verify.
  • CI reports a failing test in tests/test_transforms.py::test_none_input; regression test added; minimal code change to fix the path.
  • Data pipeline error shows a KeyError; regression test added to lock in the behavior; fix touches only the affected mapping.
  • After a refactor, an edge-case becomes flaky; regression test added; small targeted fix near the affected function.
  • Bug fixed and regression test kept in the repo to prevent reintroduction; linting and mypy pass before final review.
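The KeyError pipeline case above can be sketched the same way. The mapping and function names are hypothetical, chosen only to illustrate the minimal-fix principle:

```python
# Hypothetical pipeline lookup that originally used STATUS_LABELS[status]
# and raised KeyError for unknown statuses; the fix touches only this
# mapping access, per the "fix touches only the affected mapping" case.

STATUS_LABELS = {"ok": "Success", "err": "Failure"}


def label_for(status):
    # Fixed: .get() with a default, so unknown statuses no longer raise.
    return STATUS_LABELS.get(status, "Unknown")


def test_label_for_unknown_status():
    # Regression test locking in the fixed behavior for unknown keys.
    assert label_for("does-not-exist") == "Unknown"
```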
