
systematic-debugging

npx machina-cli add skill CodingCossack/agent-skills-library/systematic-debugging --openclaw
Files (1): SKILL.md (4.5 KB)

Systematic Debugging

Core principle: Find root cause before attempting fixes. Symptom fixes are failure.

NO FIXES WITHOUT ROOT CAUSE INVESTIGATION FIRST

Phase 1: Root Cause Investigation

BEFORE attempting ANY fix:

  1. Read Error Messages Carefully

    • Read stack traces completely
    • Note line numbers, file paths, error codes
    • Don't skip warnings
  2. Reproduce Consistently

    • What are the exact steps?
    • If not reproducible → gather more data, don't guess
  3. Check Recent Changes

    • Git diff, recent commits
    • New dependencies, config changes
    • Environmental differences
  4. Gather Evidence in Multi-Component Systems

    WHEN system has multiple components (CI → build → signing, API → service → database):

    Add diagnostic instrumentation before proposing fixes:

    For EACH component boundary:
      - Log what data enters/exits component
      - Verify environment/config propagation
      - Check state at each layer
    
    Run once to gather evidence → analyze → identify failing component
    

    Example:

     # Layer 1: Workflow: report whether the secret is present without printing it
     echo "=== Secrets available: ==="
     echo "IDENTITY: $([ -n "$IDENTITY" ] && echo SET || echo UNSET)"
     
     # Layer 2: Build script: confirm propagation, again without leaking the value
     env | grep -q '^IDENTITY=' && echo "IDENTITY in environment" || echo "IDENTITY not in environment"
     
     # Layer 3: Signing: list valid signing identities in the keychain (macOS)
     security find-identity -v
     
    
  5. Trace Data Flow

    See references/root-cause-tracing.md for backward tracing technique.

    Quick version: Where does bad value originate? Trace up call chain until you find the source. Fix at source.
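The backward-tracing step above can be sketched as a toy pipeline. The function names (`load_config`, `build`, `sign`) are hypothetical stand-ins, not part of this skill's files: the symptom surfaces at the bottom layer, but the fix belongs at the top.

```shell
#!/bin/sh
# Toy three-layer chain: the symptom appears in sign(), but the bad
# (empty) value originates in load_config(). Trace upward; fix there.

load_config() {
    # ROOT CAUSE: imagine a misspelled config key, so nothing is read.
    printf ''
}

build() {
    # The empty value just passes through; patching here hides the bug.
    printf '%s' "$(load_config)"
}

sign() {
    identity="$(build)"
    if [ -z "$identity" ]; then
        echo "sign: empty identity, trace up the call chain"
    else
        echo "sign: signing as $identity"
    fi
}

sign
```

Patching `sign` to tolerate an empty identity would silence the symptom; tracing to `load_config` fixes the source.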

Phase 2: Pattern Analysis

  1. Find Working Examples - Similar working code in codebase
  2. Compare Against References - Read reference implementations COMPLETELY, don't skim
  3. Identify Differences - List every difference, don't assume "that can't matter"
  4. Understand Dependencies - Components, config, environment, assumptions
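Step 3 above, listing every difference, can be done mechanically rather than by eye. A minimal sketch with hypothetical config files:

```shell
#!/bin/sh
# Compare a known-working config against the failing one and list every
# difference, instead of assuming "that can't matter".
cat > /tmp/working.env <<'EOF'
SIGNING_ENABLED=true
IDENTITY=dev-cert
TIMEOUT=30
EOF
cat > /tmp/failing.env <<'EOF'
SIGNING_ENABLED=true
TIMEOUT=30
EOF

# diff exits 1 when files differ; that is the expected, useful case here.
diff /tmp/working.env /tmp/failing.env || true
rm -f /tmp/working.env /tmp/failing.env
```

Every line diff reports is a candidate difference to rule in or out; here it would surface the missing `IDENTITY=dev-cert`.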

Phase 3: Hypothesis and Testing

  1. Form Single Hypothesis - "I think X is root cause because Y" - be specific
  2. Test Minimally - SMALLEST possible change, one variable at a time
  3. Verify - Worked → Phase 4. Didn't work → form NEW hypothesis, don't stack fixes
  4. When You Don't Know - Say so. Don't pretend.
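The one-variable-at-a-time discipline in step 2 looks like this as a shell sketch (`run_build` and `BUILD_IDENTITY` are hypothetical stand-ins for your actual build and suspect variable):

```shell
#!/bin/sh
# Hypothesis: "the build fails because BUILD_IDENTITY is unset."
# Test it with two otherwise identical runs that differ in exactly
# one variable.
run_build() {
    if [ -n "$BUILD_IDENTITY" ]; then echo "build ok"; else echo "build failed"; fi
}

( unset BUILD_IDENTITY; run_build )      # baseline: reproduces the failure
( BUILD_IDENTITY=dev-cert; run_build )   # single change: tests the hypothesis
```

If the second run still fails, the hypothesis is wrong; form a new one rather than stacking a second change on top.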

Phase 4: Implementation

  1. Create Failing Test Case

    • Use the test-driven-development skill
    • MUST have before fixing
  2. Implement Single Fix

    • ONE change at a time
    • No "while I'm here" improvements
  3. Verify Fix

    • Test passes? Other tests still pass? Issue resolved?
  4. If Fix Doesn't Work

    • Count attempts
    • If < 3: Return to Phase 1 with new information
    • If ≥ 3: Escalate (below)
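Steps 1-3 above, as a minimal red/green sketch (the `slugify` function is hypothetical; for the full workflow this skill defers to test-driven-development):

```shell
#!/bin/sh
# 1. Failing test first: pin the bug down before touching the code.
slugify() { printf '%s' "$1" | tr ' ' '-'; }     # buggy: forgets to lowercase

expected="hello-world"
actual="$(slugify "Hello World")"
[ "$actual" = "$expected" ] && echo PASS || echo "FAIL: got $actual"

# 2. One fix, no "while I'm here" improvements: add lowercasing only.
slugify() { printf '%s' "$1" | tr ' A-Z' '-a-z'; }

# 3. Verify: the same test now passes.
actual="$(slugify "Hello World")"
[ "$actual" = "$expected" ] && echo PASS || echo "FAIL: got $actual"
```

The first run prints `FAIL: got Hello-World`, proving the test catches the bug; only then is the single fix applied and re-verified.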

Escalation: 3+ Failed Fixes

Pattern indicating architectural problem:

  • Each fix reveals new problems elsewhere
  • Fixes require massive refactoring
  • Shared state/coupling keeps surfacing

Action: STOP. Question fundamentals:

  • Is this pattern fundamentally sound?
  • Are we continuing through inertia?
  • Refactor architecture vs. continue fixing symptoms?

Discuss with your human partner before more fix attempts. This is a wrong-architecture problem, not a failed hypothesis.

Red Flags → STOP and Return to Phase 1

If you catch yourself thinking:

  • "Quick fix for now, investigate later"
  • "Just try changing X"
  • "I'll skip the test"
  • "It's probably X"
  • "Pattern says X but I'll adapt it differently"
  • Proposing solutions before tracing data flow
  • "One more fix" after 2+ failures

Human Signals You're Off Track

  • "Is that not happening?" → You assumed without verifying
  • "Will it show us...?" → You should have added evidence gathering
  • "Stop guessing" → You're proposing fixes without understanding
  • "Ultrathink this" → Question fundamentals
  • Frustrated "We're stuck?" → Your approach isn't working

Response: Return to Phase 1.

Supporting Techniques

Reference files in references/:

  • root-cause-tracing.md - Trace bugs backward through call stack
  • defense-in-depth.md - Add validation at multiple layers after finding root cause
  • condition-based-waiting.md - Replace arbitrary timeouts with condition polling
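The condition-polling idea named above can be sketched as a deadline loop (this is only an assumed illustration of the technique, not the contents of the reference file):

```shell
#!/bin/sh
# Poll for a condition with a deadline instead of sleeping a fixed,
# arbitrary amount and hoping it was long enough.
wait_for() {   # usage: wait_for TIMEOUT_SECONDS COMMAND [ARGS...]
    deadline=$(( $(date +%s) + $1 )); shift
    until "$@"; do
        [ "$(date +%s)" -ge "$deadline" ] && return 1
        sleep 1
    done
}

: > /tmp/ready.flag                          # simulate the condition being met
wait_for 5 test -f /tmp/ready.flag && echo "condition met"
rm -f /tmp/ready.flag
```

`wait_for` returns as soon as the condition holds and fails cleanly at the deadline, so tests neither flake on slow machines nor waste time on fast ones.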

Related skills:

  • test-driven-development - Creating failing test case (Phase 4)
  • verification-before-completion - Verify fix before claiming success

Source

git clone https://github.com/CodingCossack/agent-skills-library

Overview

Systematic debugging emphasizes finding the root cause before attempting fixes. It uses a structured, phase-based approach to uncover where a bug originates, especially when symptoms are misleading or fixes have failed. This helps you avoid firefighting and makes repairs durable.

How This Skill Works

The process unfolds in four phases: Root Cause Investigation, Pattern Analysis, Hypothesis and Testing, and Implementation, with escalation if fixes fail. Practically, you read error messages, reproduce issues, review changes, and trace data flow across components, then test hypotheses with single, minimal changes before confirming a fix.

When to Use It

  • When bugs, test failures, or unexpected behavior have non-obvious causes
  • After multiple fix attempts have failed without identifying a root cause
  • In multi-component systems (CI/build, signing, API/service, database) to gather evidence at each boundary
  • When tracing data flow to locate the origin of a bad value or state
  • When forming a single hypothesis and testing minimally before implementing a fix

Quick Start

  1. Step 1: Read error messages carefully, reproduce consistently, and review recent changes
  2. Step 2: Instrument across component boundaries and trace data flow to locate the source
  3. Step 3: Create a failing test, implement a single fix, verify, and escalate if needed

Best Practices

  • Read error messages and complete stack traces; note line numbers, file paths, error codes, and warnings
  • Reproduce the issue consistently with exact steps; if not reproducible, gather more data and don't guess
  • Check recent changes, diffs, and environmental differences that could affect behavior
  • Gather evidence across all component boundaries by logging inputs/outputs and environment propagation
  • Trace data flow to the source and fix at the origin rather than addressing symptoms

Example Use Cases

  • Debugging a multi-component CI→build→signing pipeline where an identity value is lost at a boundary; instrument each layer to confirm where it drops
  • Investigating a flaky test that appeared after a recent dependency update by reviewing diffs and reading the reference implementation completely
  • A service failure due to an environment variable not propagating from staging to production; collect logs, env dumps, and trace data flow across components
  • A failure where the stack trace points to a library but the root cause is upstream data validation; trace data flow to locate the original source
  • Encountering repeated fixes that reveal a fundamental architectural mismatch; escalate and discuss with a human partner

