bug-review

npx machina-cli add skill athola/claude-night-market/bug-review --openclaw
Files (1): SKILL.md (6.6 KB)


Bug Review Workflow

Systematic bug identification and fixing with language-specific expertise.

Quick Start

/bug-review

Verification: Run the command with the --help flag to confirm it is available.

When To Use

  • Reviewing code for potential bugs
  • After receiving bug reports
  • Before major releases
  • During security audits
  • Investigating production issues

When NOT To Use

  • Test coverage audit - use test-review instead

Required TodoWrite Items

  1. bug-review:language-detected
  2. bug-review:repro-plan
  3. bug-review:defects-documented
  4. bug-review:fixes-prepared
  5. bug-review:verification-plan

Progressive Loading

Load additional context as needed:

  • Language Detection: @include modules/language-detection.md - Manifest heuristics, expertise framing, version constraints
  • Defect Documentation: @include modules/defect-documentation.md - Severity classification, root cause analysis, static analyzers
  • Fix Preparation: @include modules/fix-preparation.md - Minimal patches, idiomatic patterns, test coverage

Workflow

Step 1: Detect Languages (bug-review:language-detected)

Identify dominant languages using manifest files (Cargo.toml → Rust, package.json → Node, etc.).

State an expertise persona appropriate for the language ecosystem.

Note version constraints (MSRV, Python versions, Node engines).

Progressive: Load modules/language-detection.md for detailed manifest heuristics.
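The manifest heuristic above can be sketched as a simple lookup. This is an illustrative sketch, not part of the skill; the MANIFESTS table and detect_languages helper are hypothetical names.

```python
from pathlib import Path

# Hypothetical manifest-to-language table; extend it for your ecosystems.
MANIFESTS = {
    "Cargo.toml": "Rust",
    "package.json": "Node",
    "pyproject.toml": "Python",
    "go.mod": "Go",
}

def detect_languages(root: str) -> set:
    """Return the set of languages whose manifest files exist under root."""
    base = Path(root)
    return {lang for name, lang in MANIFESTS.items() if (base / name).exists()}
```

A real detector would also weigh file counts and nested manifests in monorepos.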

Step 2: Plan Reproduction (bug-review:repro-plan)

Identify reproduction methods:

  • Unit/integration test suites
  • Fuzzing tools
  • Manual reproduction commands

Document exact commands:

cargo test -p core
pytest tests/test_api.py
npm test -- pkg

Verification: Run pytest -v tests/test_api.py to confirm the suite executes.

Capture blockers and propose mocks when dependencies unavailable.
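The documented commands can be driven by a small runner that records outputs and blockers alike. run_repro is a hypothetical helper sketched here, not part of the skill:

```python
import subprocess

def run_repro(cmd, timeout=300):
    """Run one reproduction command; capture its evidence or the blocker."""
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
        return {"cmd": cmd, "returncode": proc.returncode,
                "stdout": proc.stdout, "stderr": proc.stderr}
    except FileNotFoundError:
        # Blocker: the tool is not installed; record it rather than crash.
        return {"cmd": cmd, "blocker": "command not found"}
    except subprocess.TimeoutExpired:
        return {"cmd": cmd, "blocker": f"timed out after {timeout}s"}
```

Feeding each documented command through such a runner yields the reproducible evidence trail the later steps expect.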

Step 3: Document Defects (bug-review:defects-documented)

Review code line-by-line, logging each bug with:

  • File:line reference: Precise location
  • Severity: Critical, High, Medium, Low
  • Root cause: Logic error, API misuse, concurrency, resource leak
  • Impact: What breaks and how

Run static analyzers (cargo clippy, ruff check, golangci-lint, eslint).

Use imbue:evidence-logging for reproducible capture.

Progressive: Load modules/defect-documentation.md for classification details and analyzer commands.
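The four required fields map naturally onto a small record type. A sketch, with hypothetical names (Defect, SEVERITIES):

```python
from dataclasses import dataclass

SEVERITIES = ("Critical", "High", "Medium", "Low")

@dataclass
class Defect:
    """One logged defect carrying the fields Step 3 requires."""
    file: str
    line: int
    severity: str
    root_cause: str
    impact: str

    def __post_init__(self):
        # Reject severities outside the skill's four-level scale.
        if self.severity not in SEVERITIES:
            raise ValueError(f"unknown severity: {self.severity}")

    @property
    def reference(self) -> str:
        # Precise file:line location, as the output format expects.
        return f"{self.file}:{self.line}"
```

Validating severity at construction time keeps the defect log consistent with the condensed classification below.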

Step 4: Prepare Fixes (bug-review:fixes-prepared)

Draft minimal, idiomatic patches using language best practices:

  • Guard clauses (Rust: pattern matching, Python: early returns)
  • Resource cleanup (Go: defer, Python: context managers)
  • Error propagation (Rust: ?, Go: wrapped errors)

Create tests following Red → Green pattern:

  1. Write failing test
  2. Apply minimal fix
  3. Verify test passes

Progressive: Load modules/fix-preparation.md for language-specific patterns and test strategies.
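As a hypothetical Red → Green example (the average function below is illustrative, not from the skill): the guard clause is the entire fix, and the test is written first so it fails against the buggy version.

```python
from typing import Optional

# Before the fix, average([]) raised ZeroDivisionError (the hypothetical bug).
def average(values) -> Optional[float]:
    if not values:          # guard clause: the minimal fix (early return)
        return None
    return sum(values) / len(values)

# Red -> Green: this test fails on the buggy version, passes after the fix.
def test_average_empty_input():
    assert average([]) is None
    assert average([2.0, 4.0]) == 3.0
```

Nothing beyond the guard changes, which keeps the diff small and the review focused.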

Step 5: Verification Plan (bug-review:verification-plan)

Execute reproduction steps with fixes applied.

Capture evidence:

  • Test output logs
  • Benchmark comparisons
  • Coverage reports

Document remaining risks using imbue:diff-analysis/modules/risk-assessment-framework.

Assign owners and deadlines for follow-up items.

Defect Classification (Condensed)

Severity: Critical (crash/data loss) → High (broken features) → Medium (degraded UX) → Low (edge cases)

Root Causes: Logic errors | API misuse | Concurrency issues | Resource leaks | Validation gaps
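The severity ladder implies an ordering for triage; a minimal sketch (SEVERITY_ORDER and triage are hypothetical names):

```python
# Map the condensed severity ladder onto sortable ranks.
SEVERITY_ORDER = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

def triage(defects):
    """Sort defect records so Critical items surface first."""
    return sorted(defects, key=lambda d: SEVERITY_ORDER[d["severity"]])
```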

Output Format

## Summary
[Brief scope description]

## Defects Found
### [D1] file.rs:142 - Title
- Severity: High
- Root Cause: Logic error
- Impact: Data corruption possible
- Fix: [description]

## Proposed Fixes
### Fix for D1
[code diff with explanation]

## Test Updates
[new/updated tests with Red → Green verification]

## Evidence
- Commands executed
- Logs and outputs
- External references

Verification: Run pytest -v to verify tests pass.
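Defect entries in the format above can be emitted mechanically; render_defect is a hypothetical helper sketched to show the mapping:

```python
def render_defect(d, idx):
    """Format one defect record as an entry in the Output Format above."""
    return (f"### [D{idx}] {d['file']}:{d['line']} - {d['title']}\n"
            f"- Severity: {d['severity']}\n"
            f"- Root Cause: {d['root_cause']}\n"
            f"- Impact: {d['impact']}\n"
            f"- Fix: {d['fix']}")
```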

Best Practices

  1. Evidence-based: Every finding has file:line reference
  2. Reproducible: Clear steps to reproduce each bug
  3. Minimal fixes: Smallest change that fixes the issue
  4. Test coverage: Every fix has corresponding test
  5. Risk awareness: Document remaining risks with severity scoring

Exit Criteria

  • All defects documented with precise references
  • Fixes prepared with test coverage verified
  • Verification plan includes commands and expected outputs
  • Remaining risks assessed and owners assigned

Troubleshooting

Common Issues

Command not found: Ensure all dependencies are installed and available on your PATH.

Permission errors: Check file permissions and run with appropriate privileges.

Unexpected behavior: Enable verbose logging with the --verbose flag.

Source

git clone https://github.com/athola/claude-night-market

The skill file is at plugins/pensive/skills/bug-review/SKILL.md; view it on GitHub at https://github.com/athola/claude-night-market/blob/master/plugins/pensive/skills/bug-review/SKILL.md.

Overview

Bug Review provides a structured, evidence-driven workflow for systematic bug hunting. It combines language-aware defect detection with precise defect documentation, reproduction planning, fix preparation, and verification planning. This evidence trail accelerates reliable fixes in complex codebases.

How This Skill Works

Key steps include detecting dominant languages via manifests, planning reproduction steps, logging defects with file:line precision, and using integrated tools such as defect-tracker, fix-generator, and verification-runner. The skill relies on progressive loading to pull modules (language-detection, defect-documentation, fix-preparation) as needed, producing output that can be validated against the verification plan.


Quick Start

  1. Run /bug-review to initialize the workflow
  2. Run with --help to verify availability
  3. Follow the documented steps: detect languages, plan reproduction, document defects, prepare fixes, and set up the verification plan

Best Practices

  • Use progressive loading to pull relevant modules as needed
  • Log defects with precise File:Line references and clear root-cause notes
  • Document exact reproduction commands and any blockers
  • Tie fixes to an explicit verification plan and tests to ensure coverage
  • Maintain an evidence trail across defect-tracker, fix-generator, and verification-runner outputs

Example Use Cases

  • Tracing a race condition in a Rust service and logging precise file:line where it occurs
  • Reproducing a flaky Node API bug using exact commands and outputs
  • Security audit: validating input sanitization bug across modules
  • Production incident: memory leak traced with defect documentation and verification plan
  • CI regression: root cause identified by diff-analysis and risk assessment framework
