systematic-debugging
Install: npx machina-cli add skill a5c-ai/babysitter/systematic-debugging --openclaw

Systematic Debugging
Overview
Random fixes waste time and create new bugs. ALWAYS find the root cause before attempting a fix.
Core principle: No fixes without root cause investigation first.
The Four Phases
- Root Cause Investigation - Read errors, reproduce, check changes, gather evidence at component boundaries
- Pattern Analysis - Find working examples, compare against references, identify differences
- Hypothesis and Testing - Form single hypothesis, test minimally, one variable at a time
- Implementation - Create failing test case, implement single fix, verify
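As a rough sketch, the four phases above can be modeled as an ordered checklist that refuses to advance without recorded evidence. The session object and its method names here are illustrative only, not part of the skill's actual API:

```javascript
// Hypothetical sketch: an ordered debugging checklist enforcing the four
// phases. Phase names mirror the list above; the API itself is invented.
const PHASES = [
  "root-cause-investigation",
  "pattern-analysis",
  "hypothesis-and-testing",
  "implementation",
];

function createDebugSession() {
  let current = 0; // index into PHASES
  return {
    phase: () => PHASES[current],
    // Record findings for the current phase, then advance to the next one.
    complete(findings) {
      if (!findings || findings.length === 0) {
        throw new Error(`Cannot leave ${PHASES[current]} without evidence`);
      }
      if (current < PHASES.length - 1) current += 1;
      return PHASES[current];
    },
  };
}

const session = createDebugSession();
session.complete(["stack trace read", "issue reproduced"]);
console.log(session.phase()); // "pattern-analysis"
```

The point of the guard is that skipping a phase (completing it with no evidence) is an error, matching the skill's "no fixes without root cause investigation" rule.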
When 3+ Fixes Fail
Stop and question the architecture. Pattern of repeated failures indicates architectural problems, not implementation bugs.
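One way to operationalize this rule is a trivial escalation check (purely illustrative; the function name is invented):

```javascript
// Hypothetical sketch: escalate from implementation-level fixes to an
// architecture review once three fix attempts have failed.
function nextAction(failedAttempts) {
  return failedAttempts >= 3 ? "question-architecture" : "attempt-fix";
}

console.log(nextAction(1)); // "attempt-fix"
console.log(nextAction(3)); // "question-architecture"
```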
Red Flags (STOP and Follow Process)
- "Quick fix for now"
- "Just try changing X"
- Proposing solutions before tracing data flow
- "One more fix attempt" (when already tried 2+)
Agents Used
- Process agents defined in systematic-debugging.js
Tool Use
Invoke via babysitter process: methodologies/superpowers/systematic-debugging
Source
https://github.com/a5c-ai/babysitter/blob/main/plugins/babysitter/skills/babysit/process/methodologies/superpowers/skills/systematic-debugging/SKILL.md

Overview
Systematic Debugging is a disciplined approach to bugs, failures, and unexpected behavior. It requires performing root cause investigation before proposing or implementing changes, reducing waste and the risk of new defects. It centers on evidence gathering, pattern analysis, and controlled testing across components.
How This Skill Works
Follow the four phases: Root Cause Investigation, Pattern Analysis, Hypothesis and Testing, and Implementation. Start by reading errors, reproducing the issue, and gathering evidence at component boundaries; then compare against references to spot differences; test a single hypothesis with minimal changes; finally implement a focused fix and verify.
When to Use It
- When you encounter a bug, test failure, or unexpected behavior and want to understand it first.
- When prior quick fixes haven't resolved the issue after multiple attempts.
- When failures span multiple components and data flow needs tracing.
- When you are about to propose a fix and need to confirm a traced root cause first.
- When 3+ fixes have failed, prompting architectural reassessment.
Quick Start
- Step 1: Reproduce the issue and collect errors, tracing the problem across component boundaries.
- Step 2: Conduct Root Cause Investigation and Pattern Analysis to form a single hypothesis.
- Step 3: Implement a minimal fix, add/adjust a failing test, and verify the outcome.
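The "one variable at a time" discipline behind Step 2 can be sketched as a tiny experiment runner. The runner, its parameters, and the toy reproduction below are all hypothetical:

```javascript
// Hypothetical sketch: test candidate factors one at a time against a
// reproduction function, so each run isolates exactly one variable.
function isolateCause(baseline, factors, reproduce) {
  const results = [];
  for (const [name, value] of Object.entries(factors)) {
    // Change exactly one factor relative to the baseline per run.
    const config = { ...baseline, [name]: value };
    results.push({ factor: name, fails: reproduce(config) });
  }
  // The suspects are the factors that flip the reproduction to failing.
  return results.filter((r) => r.fails).map((r) => r.factor);
}

// Toy reproduction: the bug only appears when retries are disabled.
const suspects = isolateCause(
  { retries: 3, timeoutMs: 500, cache: true },
  { retries: 0, timeoutMs: 50, cache: false },
  (cfg) => cfg.retries === 0
);
console.log(suspects); // ["retries"]
```

Because each run differs from the baseline by a single factor, any change in outcome is attributable to that factor alone, which is exactly what a single-variable hypothesis test requires.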
Best Practices
- Always start with Root Cause Investigation: read errors, reproduce, and gather evidence at component boundaries.
- Perform Pattern Analysis: find working examples, compare against references, and identify differences.
- Use Hypothesis and Testing: form a single hypothesis and test minimally, one variable at a time.
- Proceed to Implementation with a controlled change and a failing test case to verify the fix.
- If failures persist (3+ fixes), pause and question the architecture rather than continuing fixes.
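The "failing test case" practice can be sketched as follows, assuming a deliberately buggy function for illustration: the test encodes expected behavior before the fix, so it fails against the buggy version and passes once the single fix lands.

```javascript
// Hypothetical sketch of "failing test first". buggySum drops the first
// element (an off-by-one in slice), fixedSum is the single corrected fix.
const buggySum = (xs) => xs.slice(1).reduce((a, b) => a + b, 0);
const fixedSum = (xs) => xs.reduce((a, b) => a + b, 0);

function failingTest(sum) {
  // Expected behavior, derived from the traced root cause.
  return sum([1, 2, 3]) === 6;
}

console.log(failingTest(buggySum)); // false: reproduces the defect
console.log(failingTest(fixedSum)); // true: the single fix is verified
```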
Example Use Cases
- Debugging a flaky API integration by tracing data flow at component boundaries and comparing against a reference implementation.
- Tracing a regression after a refactor by examining boundary interactions to pinpoint where the break occurred.
- Intermittent CI failure analyzed by collecting evidence and contrasting with a known good run.
- Bug introduced after a dependency update, identified by differences from expected behavior.
- Race condition in asynchronous processing resolved by testing a single hypothesis with controlled timing.
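For the race-condition case, "controlled timing" can mean replaying each possible interleaving deterministically instead of relying on real timers. The sketch below is a hypothetical, simplified model with two conflicting writes:

```javascript
// Hypothetical sketch: replay both orderings of two "concurrent" writes
// deterministically, so the hypothesis ("last write wins, and the order is
// nondeterministic") is tested under controlled timing.
function runInterleaving(order) {
  const state = { value: null };
  const steps = {
    a: () => { state.value = "a"; },
    b: () => { state.value = "b"; },
  };
  for (const step of order) steps[step]();
  return state.value;
}

console.log(runInterleaving(["a", "b"])); // "b"
console.log(runInterleaving(["b", "a"])); // "a" — order changes the outcome
```

Showing that the outcome depends only on ordering confirms the hypothesis without flaky timing-based reproduction.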