
systematic-debugging

npx machina-cli add skill parthalon025/autonomous-coding-toolkit/systematic-debugging --openclaw
Files (1): SKILL.md (4.3 KB)

Systematic Debugging

Overview

Random fixes waste time and create new bugs. Quick patches mask underlying issues.

Core principle: ALWAYS find root cause before attempting fixes. Symptom fixes are failure.

Violating the letter of this process is violating the spirit of debugging.

The Iron Law

NO FIXES WITHOUT ROOT CAUSE INVESTIGATION FIRST

If you haven't completed Phase 1, you cannot propose fixes.

When to Use

Use for ANY technical issue:

  • Test failures
  • Bugs in production
  • Unexpected behavior
  • Performance problems
  • Build failures
  • Integration issues

The Four Phases

You MUST complete each phase before proceeding to the next.

Phase 1: Root Cause Investigation

BEFORE attempting ANY fix:

  1. Read Error Messages Carefully

    • Don't skip past errors or warnings
    • Read stack traces completely
    • Note line numbers, file paths, error codes
  2. Reproduce Consistently

    • Can you trigger it reliably?
    • What are the exact steps?
    • If not reproducible → gather more data, don't guess
  3. Check Recent Changes

    • What changed that could cause this?
    • Git diff, recent commits
    • New dependencies, config changes
  4. Gather Evidence in Multi-Component Systems

    WHEN system has multiple components:

    BEFORE proposing fixes, add diagnostic instrumentation:

    For EACH component boundary:
      - Log what data enters component
      - Log what data exits component
      - Verify environment/config propagation
      - Check state at each layer
    
    Run once to gather evidence showing WHERE it breaks
    THEN analyze evidence to identify failing component
    THEN investigate that specific component
    
  5. Trace Data Flow

    • Where does bad value originate?
    • What called this with bad value?
    • Keep tracing up until you find the source
    • Fix at source, not at symptom
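The boundary-instrumentation steps above can be sketched as a small logging decorator; a minimal sketch, assuming two hypothetical components (`parse_config`, `build_client`) standing in for real boundaries:

```python
import functools
import logging

logging.basicConfig(level=logging.DEBUG, format="%(message)s")
log = logging.getLogger("diagnostics")

def trace_boundary(func):
    """Log what data enters and exits a component boundary."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log.debug("ENTER %s args=%r kwargs=%r", func.__name__, args, kwargs)
        result = func(*args, **kwargs)
        log.debug("EXIT  %s -> %r", func.__name__, result)
        return result
    return wrapper

# Hypothetical components: instrument each boundary, run once,
# then read the log to see WHERE the data first goes bad.
@trace_boundary
def parse_config(raw):
    return {"timeout": int(raw.strip())}

@trace_boundary
def build_client(config):
    return f"client(timeout={config['timeout']})"

build_client(parse_config(" 30 "))
```

Reading the enter/exit log top to bottom identifies the failing component without guessing, which is the whole point of step 4.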

Phase 2: Pattern Analysis

  1. Find Working Examples - Locate similar working code
  2. Compare Against References - Read reference implementation COMPLETELY
  3. Identify Differences - List every difference, however small
  4. Understand Dependencies - What other components does this need?
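Steps 2 and 3 can be carried out mechanically by diffing the failing code or config against the working reference; a minimal sketch using the standard library, where both snippets are invented examples:

```python
import difflib

# Hypothetical working reference vs. failing variant of a config.
working = ["retries = 3", "timeout = 30", "verify_tls = true"]
broken  = ["retries = 3", "timeout = 30", "verify_tls = false"]

# "List every difference, however small" -- unified_diff shows each one.
diff = list(difflib.unified_diff(working, broken,
                                 fromfile="reference", tofile="mine",
                                 lineterm=""))
for line in diff:
    print(line)
```

Even a one-character difference shows up as a `-`/`+` pair, so nothing gets dismissed as "too small to matter".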

Phase 3: Hypothesis and Testing

  1. Form Single Hypothesis - "I think X is the root cause because Y"
  2. Test Minimally - SMALLEST possible change, one variable at a time
  3. Verify Before Continuing - Did it work? If not, form NEW hypothesis
  4. When You Don't Know - Say so. Don't pretend.
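The "one variable at a time" discipline can be sketched as a tiny experiment: keep every input fixed except the one your hypothesis names. The function and inputs here are invented for illustration:

```python
# Hypothesis: "parse_amount fails because the input uses a comma
# decimal separator." Test ONLY that variable; everything else fixed.

def parse_amount(text):
    # Candidate fix under test: normalize the separator at the source.
    return float(text.replace(",", "."))

baseline = parse_amount("19.99")    # known-good input
experiment = parse_amount("19,99")  # same value, only the separator changed

assert experiment == baseline       # hypothesis confirmed for this case
```

If the assertion fails, the hypothesis is wrong and you form a new one; you never stack a second change on top of an unconfirmed first one.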

Phase 4: Implementation

  1. Create Failing Test Case - Simplest possible reproduction
  2. Implement Single Fix - ONE change at a time, no "while I'm here" improvements
  3. Verify Fix - Test passes? No other tests broken?
  4. If Fix Doesn't Work - If < 3 attempts: return to Phase 1. If >= 3: question the architecture.
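Steps 1–3 can be sketched as a test-first repair; the function and bug are hypothetical. The test below was written against a buggy `slugify` that lacked `.lower()`, and the single one-line fix makes it pass:

```python
def slugify(title):
    """Single fix: lower-case before replacing spaces. The root cause
    was the missing .lower(); no unrelated "while I'm here" changes."""
    return title.lower().replace(" ", "-")

# Simplest possible reproduction, written BEFORE the fix so it
# demonstrably failed against the buggy version.
def test_slug_is_lowercase():
    assert slugify("Hello World") == "hello-world"

test_slug_is_lowercase()
```

Because the test encodes the reproduction, step 3 (verify the fix, and that nothing else broke) is just running the suite again.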

Red Flags - STOP and Follow Process

  • "Quick fix for now, investigate later"
  • "Just try changing X and see if it works"
  • "I don't fully understand but this might work"
  • Proposing solutions before tracing data flow
  • "One more fix attempt" (when already tried 2+)

ALL of these mean: STOP. Return to Phase 1.

Common Rationalizations

| Excuse | Reality |
| --- | --- |
| "Issue is simple, don't need process" | Simple issues have root causes too. |
| "Emergency, no time for process" | Systematic debugging is FASTER than thrashing. |
| "I see the problem, let me fix it" | Seeing symptoms ≠ understanding root cause. |
| "One more fix attempt" (after 2+ failures) | 3+ failures = architectural problem. |

Quick Reference

| Phase | Key Activities | Success Criteria |
| --- | --- | --- |
| 1. Root Cause | Read errors, reproduce, check changes | Understand WHAT and WHY |
| 2. Pattern | Find working examples, compare | Identify differences |
| 3. Hypothesis | Form theory, test minimally | Confirmed or new hypothesis |
| 4. Implementation | Create test, fix, verify | Bug resolved, tests pass |

Supporting Techniques

  • root-cause-tracing.md - Trace bugs backward through call stack
  • defense-in-depth.md - Add validation at multiple layers after finding root cause
  • condition-based-waiting.md - Replace arbitrary timeouts with condition polling
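The condition-based-waiting technique referenced above can be sketched as a generic poll-until helper that replaces an arbitrary sleep; a minimal sketch, where `wait_until` is an invented name:

```python
import time

def wait_until(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until it returns truthy, instead of guessing a
    fixed sleep. Raises TimeoutError if the deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")

# Example: wait for a flag another worker would set, rather than
# writing time.sleep(2) and hoping two seconds is always enough.
state = {"ready": False}
state["ready"] = True            # normally set asynchronously
wait_until(lambda: state["ready"])
```

Polling on the actual condition makes tests both faster (they stop waiting the moment the condition holds) and less flaky (slow environments get the full timeout).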

Source

git clone https://github.com/parthalon025/autonomous-coding-toolkit

View on GitHub: https://github.com/parthalon025/autonomous-coding-toolkit/blob/main/skills/systematic-debugging/SKILL.md

Overview

Systematic debugging enforces finding the root cause before proposing fixes, avoiding quick patches that mask underlying issues. It emphasizes evidence gathering, reproducibility, and tracing data flow across components to ensure robust solutions.

How This Skill Works

Follow the four phases: Phase 1 Root Cause Investigation, Phase 2 Pattern Analysis, Phase 3 Hypothesis and Testing, Phase 4 Implementation. Before attempting fixes, read error messages carefully, reproduce consistently, check recent changes, and trace data flow across component boundaries to identify the source of the problem and fix at its origin.

When to Use It

  • Test failures
  • Bugs in production
  • Unexpected behavior
  • Performance problems
  • Build failures

Quick Start

  1. Enter Phase 1 – read errors, reproduce reliably, and gather multi-component evidence.
  2. Move to Phases 2 and 3 – analyze patterns, form a single hypothesis, and test minimally.
  3. Enter Phase 4 – implement the smallest fix, verify all tests, and re-evaluate if needed.

Best Practices

  • Read error messages carefully and reproduce consistently; don’t skip stack traces or codes.
  • Check recent changes and gather evidence across all component boundaries before proposing fixes.
  • Trace data flow to locate the source of the bad value or state and fix at the source.
  • Form a single hypothesis and test it with the smallest possible change, one variable at a time.
  • If fixes don’t work after multiple attempts, return to Phase 1 or question the architecture.

Example Use Cases

  • Debugging a failing unit test in CI by tracing the erroneous input through components.
  • Resolving a production outage by tracing a request path across microservices and identifying the faulty boundary.
  • Investigating a sudden performance regression after a release by comparing with a known good baseline.
  • Diagnosing a flaky build failure caused by a transitive dependency update and reproducing across environments.
  • Fixing an integration issue between two subsystems by mapping data flow and validating environment propagation.

