
compound

npx machina-cli add skill jikig-ai/soleur/compound --openclaw

/compound

Coordinate multiple subagents working in parallel to document a recently solved problem.

Purpose

Captures problem solutions while context is fresh, creating structured documentation in knowledge-base/learnings/ with YAML frontmatter for searchability and future reference. Uses parallel subagents for maximum efficiency.

Why "compound"? Each documented solution compounds your team's knowledge. The first time you solve a problem takes research. Document it, and the next occurrence takes minutes. Knowledge compounds.

Usage

skill: soleur:compound               # Document the most recent fix
skill: soleur:compound [brief context]  # Provide additional context hint
skill: soleur:compound --headless    # Headless mode: auto-approve all prompts

Headless Mode Detection

If $ARGUMENTS contains --headless, set HEADLESS_MODE=true. Strip --headless from $ARGUMENTS before processing remaining args.

Branch safety check: If HEADLESS_MODE=true, run git branch --show-current. If the result is main or master, abort immediately with: "Error: headless compound cannot run on main/master. Checkout a feature branch first." This is defense-in-depth alongside PreToolUse hooks.

When HEADLESS_MODE=true, forward --headless to the compound-capture invocation (e.g., skill: soleur:compound-capture --headless).
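The detection and branch-safety steps above can be sketched in bash. This is a minimal illustration assuming the arguments arrive as a single string; the variable names are illustrative, not part of the skill contract:

```shell
#!/usr/bin/env bash
set -euo pipefail

ARGUMENTS="${1:-}"
HEADLESS_MODE=false

# Detect and strip the --headless flag
if [[ " $ARGUMENTS " == *" --headless "* ]]; then
  HEADLESS_MODE=true
  ARGUMENTS="${ARGUMENTS//--headless/}"
  ARGUMENTS="$(echo "$ARGUMENTS" | xargs || true)"   # trim stray whitespace
fi

# Branch safety: refuse to run headless on main/master
if [[ "$HEADLESS_MODE" == true ]]; then
  branch="$(git branch --show-current 2>/dev/null || echo "")"
  if [[ "$branch" == "main" || "$branch" == "master" ]]; then
    echo "Error: headless compound cannot run on main/master. Checkout a feature branch first." >&2
    exit 1
  fi
fi

echo "HEADLESS_MODE=$HEADLESS_MODE remaining args: $ARGUMENTS"
```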

Phase 0: Setup

Load project conventions:

# Load project conventions
if [[ -f "CLAUDE.md" ]]; then
  cat CLAUDE.md
fi

Read CLAUDE.md if it exists - apply project conventions during documentation.

Phase 0.5: Session Error Inventory (MANDATORY)

HARD RULE: Before writing any learning, enumerate ALL errors encountered in this session. Output a numbered list to the user. This step cannot be skipped even if the session felt clean.

Check for session-state.md: Run git branch --show-current. If on a feat-* branch, check if knowledge-base/specs/feat-<name>/session-state.md exists. If it does, read it and include any forwarded errors from ### Errors in the inventory. These errors occurred in preceding pipeline phases (e.g., plan+deepen subagent) whose context was compacted.
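The session-state lookup above might be sketched as follows; the awk extraction of the ### Errors section is an assumption about the file's heading layout:

```shell
#!/usr/bin/env bash
set -euo pipefail

branch="$(git branch --show-current 2>/dev/null || echo "")"

if [[ "$branch" == feat-* ]]; then
  name="${branch#feat-}"
  state_file="knowledge-base/specs/feat-${name}/session-state.md"
  if [[ -f "$state_file" ]]; then
    # Print everything under "### Errors" up to the next heading,
    # for inclusion in the session error inventory
    awk '/^### Errors/{found=1; next} /^#/{found=0} found' "$state_file"
  fi
fi
```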

Include:

  • Errors forwarded from session-state.md (if present)
  • Skill or command not found errors (e.g., wrong plugin namespace)
  • Wrong file paths, directories, or branch confusion
  • Failed bash commands or unexpected exit codes
  • API errors or unexpected responses
  • Wrong assumptions that required backtracking
  • Tools or agents that returned errors
  • Permission denials or hook rejections

If genuinely no errors occurred (including no forwarded errors), output: "Session error inventory: none detected."

This list feeds directly into the Session Errors section of the learning document. Every item on this list MUST appear in the final output unless the user explicitly excludes it.

FAILURE MODE THIS PREVENTS: Compound runs in pipeline mode, the model judges the session as "clean," and silently drops errors that happened earlier in the conversation (e.g., a skill-not-found error from one-shot Step 1 gets omitted because compound focuses only on the main implementation task).

Execution Strategy: Parallel Subagents

This command launches multiple specialized subagents IN PARALLEL to maximize efficiency:

1. Context Analyzer (Parallel)

  • Extracts conversation history
  • Identifies problem type, component, symptoms
  • Validates against solution schema
  • Returns: YAML frontmatter skeleton

2. Solution Extractor (Parallel)

  • Analyzes all investigation steps
  • Identifies root cause
  • Extracts working solution with code examples
  • Returns: Solution content block

3. Related Docs Finder (Parallel)

  • Searches knowledge-base/learnings/ for related documentation
  • Identifies cross-references and links
  • Finds related GitHub issues
  • Returns: Links and relationships

4. Prevention Strategist (Parallel)

  • Develops prevention strategies
  • Creates best practices guidance
  • Generates test cases if applicable
  • Returns: Prevention/testing content

5. Category Classifier (Parallel)

  • Determines optimal knowledge-base/learnings/ category
  • Validates category against schema
  • Suggests filename based on slug
  • Returns: Final path and filename

6. Documentation Writer (Parallel)

  • Assembles complete markdown file
  • Validates YAML frontmatter
  • Formats content for readability
  • Creates the file in correct location

7. Optional: Specialized Agent Invocation (Post-Documentation)

Based on problem type detected, automatically invoke applicable agents:

  • performance_issue --> performance-oracle
  • security_issue --> security-sentinel
  • database_issue --> data-integrity-guardian
  • Any code-heavy issue --> kieran-rails-reviewer + code-simplicity-reviewer

Phase 1.5: Deviation Analyst (Sequential)

After all parallel subagents complete and before Constitution Promotion, scan the session for workflow deviations against hard rules. This phase runs sequentially (not as a parallel subagent) to respect the max-5 parallel subagent limit.

Purpose

Close the gap between "we learned X" and "X is now enforced." The project has proven that hooks beat documentation — all existing PreToolUse hooks were added after prose rules failed. This phase detects deviations and proposes the strongest viable enforcement.

Procedure

  1. Gather rules. Read AGENTS.md and extract only ## Hard Rules and ## Workflow Gates items (Always/Never). Skip Prefer rules — they are advisory and flagging them adds noise.

  2. Gather session evidence. Two sources:

    • session-state.md (if present): read knowledge-base/specs/feat-<name>/session-state.md for forwarded errors from preceding pipeline phases (pre-compaction deviations)
    • Current context: scan the conversation for post-compaction actions — tool calls, command outputs, file edits
  3. Detect deviations. For each hard rule, check if session evidence shows a violation. Common examples:

    • Editing files in main repo when a worktree is active
    • Committing directly to main
    • Running git stash in a worktree
    • Skipping compound before commit
    • Treating a failed command as success
  4. Propose enforcement. For each detected deviation, determine if an existing hook already covers it. If yes, note the existing hook and skip. If no, propose enforcement following the hierarchy:

    • PreToolUse hook (preferred) — mechanical prevention, cannot be bypassed
    • Skill instruction — checked when skill runs, can be overridden
    • Prose rule (last resort) — requires agent compliance, weakest enforcement
  5. Format output. For each deviation, produce:

    ### Deviation: [short description]
    - **Rule violated:** [exact text from AGENTS.md or constitution.md]
    - **Evidence:** [what happened in the session]
    - **Existing enforcement:** [hook name if already covered, or "none"]
    - **Proposed enforcement:** [hook/skill_instruction/prose_rule]
    

    For hook proposals, include an inline draft script following .claude/hooks/ conventions:

    #!/usr/bin/env bash
    # PreToolUse hook: [what it blocks]
    # Source rule: [AGENTS.md or constitution.md reference]
    set -euo pipefail
    INPUT=$(cat)
    # [detection logic]
    # If violation detected:
    # jq -n '{ hookSpecificOutput: { permissionDecision: "deny", permissionDecisionReason: "BLOCKED: [reason]" } }'
    
  6. Feed into Constitution Promotion. Present each deviation to the user via the existing Accept/Skip/Edit gate in the Constitution Promotion section below. Accepted hook proposals should be manually copied to .claude/hooks/ after testing — never auto-install.

Empty Case

If no deviations are detected, output: "Deviation Analyst: no violations found." and proceed to Knowledge Base Integration.

Knowledge Base Integration

If knowledge-base/ directory exists, compound saves learnings there and offers constitution promotion:

Save Learning to Knowledge Base

Save the learning file to knowledge-base/learnings/YYYY-MM-DD-<topic>.md (using today's date). If the learnings directory is organized into category subdirectories instead, fall back to knowledge-base/learnings/<category>/<topic>.md.
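Constructing the dated filename might look like this. The slugging rules are an illustrative assumption, not specified by the skill, and the topic string is a made-up example:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical topic; in practice this comes from the captured learning
topic="N+1 query in brief generation"

# Slugify: lowercase, collapse non-alphanumerics to hyphens, trim edges
slug="$(echo "$topic" | tr '[:upper:]' '[:lower:]' | sed -E -e 's/[^a-z0-9]+/-/g' -e 's/^-+//' -e 's/-+$//')"

dest="knowledge-base/learnings/$(date +%Y-%m-%d)-${slug}.md"
echo "$dest"
```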

Learning format for knowledge-base/learnings/:

# Learning: [topic]

## Problem
[What we encountered]

## Solution
[How we solved it]

## Key Insight
[The generalizable lesson]

## Tags
category: [category]
module: [module]

Constitution Promotion (Manual or Auto)

HARD RULE: This phase MUST run even when compound is invoked inside an automated pipeline (one-shot, ship). The model has historically rationalized skipping this as "pipeline mode optimization" -- that is a protocol violation. Constitution promotion and route-to-definition are the phases that prevent repeated mistakes across sessions. If the pipeline is time-constrained, present proposals with a 5-second timeout per item, but never skip entirely.

Headless mode: If HEADLESS_MODE=true, auto-promote using LLM judgment. Review recent learnings, determine if any warrant constitution promotion, select the domain and category using LLM judgment, generate the principle text, and check for duplicates via substring match against existing rules in constitution.md. Skip any principle that is already covered. Append non-duplicate principles and commit. Do not prompt the user. For deviation analyst proposals, auto-accept hook proposals that have clear rule-to-hook mappings and skip ambiguous ones.
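The substring-match duplicate check described above might be sketched as a fixed-string grep; the principle text here is a made-up example, and the commit commands are left as comments:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Illustrative principle; in practice generated from the learning
constitution="knowledge-base/overview/constitution.md"
principle="Never commit directly to main"

# Fixed-string substring match against existing rules
if [[ -f "$constitution" ]] && grep -qF "$principle" "$constitution"; then
  echo "duplicate: principle already covered, skipping"
else
  echo "new: append principle and commit"
  # echo "- $principle" >> "$constitution"
  # git add "$constitution" && git commit -m "constitution: add principle"
fi
```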

Interactive mode: After saving the learning, present two categories of proposals:

1. Deviation Analyst proposals (if any): If Phase 1.5 produced deviations, present each one with Accept/Skip/Edit. For accepted hook proposals, display the draft script and instruct the user to manually copy it to .claude/hooks/ after testing. For accepted skill instruction or prose rule proposals, apply the edit to the target file.

2. Constitution promotion: Prompt the user:

Question: "Promote anything to constitution?"

If user says yes:

  1. Show recent learnings (last 5 from knowledge-base/learnings/)
  2. User selects which learning to promote
  3. Ask: "Which domain? (Code Style / Architecture / Testing)"
  4. Ask: "Which category? (Always / Never / Prefer)"
  5. User writes the principle (one line, actionable)
  6. Append to knowledge-base/overview/constitution.md under the correct section
  7. Commit: git commit -m "constitution: add <domain> <category> principle"

If user says no: Continue to next step
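Assuming constitution.md uses "## <domain>" / "### <category>" headings (an assumption about the file layout; adjust to the actual structure), the append-and-commit in steps 6-7 could be sketched as:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Demo constitution in a temp file; in practice: knowledge-base/overview/constitution.md
constitution="$(mktemp)"
printf '## Testing\n### Never\n- existing rule\n' > "$constitution"

domain="Testing"
category="Never"
principle="Never mock the system under test"   # hypothetical one-line principle

# Insert the principle directly under the chosen section heading
awk -v sec="## $domain" -v cat="### $category" -v p="- $principle" '
  { print }
  $0 == sec { in_sec = 1 }
  in_sec && $0 == cat { print p; in_sec = 0 }
' "$constitution" > "$constitution.tmp" && mv "$constitution.tmp" "$constitution"

cat "$constitution"
# Then commit:
#   git add knowledge-base/overview/constitution.md
#   git commit -m "constitution: add $domain $category principle"
```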

Route Learning to Definition

HARD RULE: This phase MUST run even in automated pipelines. See constitution promotion rule above.

After constitution promotion, compound routes the captured learning to the skill, agent, or command definition that was active in the session. This feeds insights back into the instructions that directly govern behavior, preventing repeated mistakes.

  1. Detect which skills, agents, or commands were invoked in this conversation. Also check session-state.md ### Components Invoked for components from preceding pipeline phases.
  2. Propose a one-line bullet edit to the most relevant section of the target definition file
  3. Headless mode: If HEADLESS_MODE=true, auto-accept the LLM-proposed edit without prompting.
  4. Interactive mode: User confirms with Accept/Skip/Edit

See compound-capture Step 8 for the full flow.

Graceful degradation: Skips if plugins/soleur/ does not exist or no components detected in the session.

Managing Learnings (Update/Archive/Delete)

Update an existing learning: Read the file in knowledge-base/learnings/, apply changes, and commit with git commit -m "learning: update <topic>".

Archive an outdated learning: Move it to knowledge-base/learnings/archive/:

mkdir -p knowledge-base/learnings/archive
git add knowledge-base/learnings/<category>/<file>.md
git mv knowledge-base/learnings/<category>/<file>.md knowledge-base/learnings/archive/

The git add ensures the file is tracked before git mv. Commit with git commit -m "learning: archive <topic>".

Delete a learning: Only with user confirmation. git rm knowledge-base/learnings/<category>/<file>.md and commit.

Managing Constitution Rules (Edit/Remove)

Edit a rule: Read knowledge-base/overview/constitution.md, find the rule, modify it, commit with git commit -m "constitution: update <domain> <category> rule".

Remove a rule: Read knowledge-base/overview/constitution.md, remove the bullet point, commit with git commit -m "constitution: remove <domain> <category> rule".

Automatic Consolidation & Archival (feature branches)

On feature branches (feat-*, feat/*, fix-*, or fix/*), consolidation runs automatically after the learning is documented and before the decision menu. This ensures artifacts are always cleaned up as part of the standard compound flow, rather than relying on a manual menu choice.

The automatic consolidation:

  1. Discovers artifacts -- extracts the feature slug by stripping feat/, feat-, fix/, or fix- prefix from the branch name, then globs knowledge-base/{brainstorms,plans}/*<slug>* and knowledge-base/specs/feat-<slug>/ (excluding */archive/)
  2. Extracts knowledge -- a single agent reads all artifacts and proposes updates to constitution.md, component docs, and overview README.md
  3. Approval flow -- Headless mode: auto-accept all proposals (idempotency still checked via substring match). Interactive mode: proposals presented one at a time with Accept/Skip/Edit; idempotency checked via substring match
  4. Archives sources -- runs bash ./plugins/soleur/skills/archive-kb/scripts/archive-kb.sh to move all discovered artifacts to archive/ subdirectories via git mv with YYYYMMDD-HHMMSS timestamp prefix. Headless mode: auto-confirm archival without prompting
  5. Single commit -- overview edits and archival moves committed together for clean git revert

If no artifacts are found for the feature slug, consolidation is skipped silently. See the compound-capture skill for full implementation details.
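The slug extraction and artifact discovery in step 1 can be sketched as follows. This is a sketch only, not the skill's actual script; the feat-demo fallback exists purely so the snippet runs outside a git repo:

```shell
#!/usr/bin/env bash
set -eo pipefail

# Fall back to a demo branch name when run outside a git repo (illustration only)
branch="$(git branch --show-current 2>/dev/null || echo "feat-demo")"

# Strip feat/, feat-, fix/, or fix- prefix to get the feature slug
slug="$branch"
slug="${slug#feat/}"; slug="${slug#feat-}"
slug="${slug#fix/}";  slug="${slug#fix-}"

shopt -s nullglob
found=0
for artifact in knowledge-base/brainstorms/*"$slug"* \
                knowledge-base/plans/*"$slug"* \
                "knowledge-base/specs/feat-$slug"; do
  [[ -e "$artifact" ]] || continue
  [[ "$artifact" == */archive/* ]] && continue   # exclude archived copies
  echo "Found artifact: $artifact"
  found=1
done

[[ "$found" -eq 1 ]] || echo "No artifacts for '$slug'; skipping consolidation."
```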

Worktree Cleanup (Manual)

Headless mode: If HEADLESS_MODE=true, skip worktree cleanup entirely (cleanup-merged handles this post-merge).

Interactive mode: At the end, if on a feature branch:

Question: "Feature complete? Clean up worktree?"

If user says yes:

git worktree remove .worktrees/feat-<name>

If user says no: Done

What It Captures

  • Problem symptom: Exact error messages, observable behavior
  • Investigation steps tried: What didn't work and why
  • Root cause analysis: Technical explanation
  • Working solution: Step-by-step fix with code examples
  • Prevention strategies: How to avoid in future
  • Session errors: Process mistakes, failed commands, and wrong approaches from the session
  • Cross-references: Links to related issues and docs

Preconditions

<preconditions enforcement="advisory">
  <check condition="problem_solved">Problem has been solved (not in-progress)</check>
  <check condition="solution_verified">Solution has been verified working</check>
  <check condition="non_trivial">Non-trivial problem (not a simple typo or obvious error)</check>
</preconditions>

What It Creates

Organized documentation:

  • File: knowledge-base/learnings/[category]/[filename].md

Categories auto-detected from problem:

  • build-errors/
  • test-failures/
  • runtime-errors/
  • performance-issues/
  • database-issues/
  • security-issues/
  • ui-bugs/
  • integration-issues/
  • logic-errors/

Success Output

✓ Parallel documentation generation complete

Primary Subagent Results:
  ✓ Context Analyzer: Identified performance_issue in brief_system
  ✓ Solution Extractor: Extracted 3 code fixes
  ✓ Related Docs Finder: Found 2 related issues
  ✓ Prevention Strategist: Generated test cases
  ✓ Category Classifier: knowledge-base/learnings/performance-issues/
  ✓ Documentation Writer: Created complete markdown

Specialized Agent Reviews (Auto-Triggered):
  ✓ performance-oracle: Validated query optimization approach
  ✓ kieran-rails-reviewer: Code examples meet Rails standards
  ✓ code-simplicity-reviewer: Solution is appropriately minimal
  ✓ every-style-editor: Documentation style verified

File created:
- knowledge-base/learnings/performance-issues/n-plus-one-brief-generation.md

This documentation will be searchable for future reference when similar
issues occur in the Email Processing or Brief System modules.

What's next?  (Headless mode: auto-selects "Continue workflow")
1. Continue workflow (recommended)
2. Add to Required Reading
3. Link related documentation
4. Update other references
5. View documentation
6. Other

The Compounding Philosophy

This creates a compounding knowledge system:

  1. First time you solve "N+1 query in brief generation" → Research (30 min)
  2. Document the solution → knowledge-base/learnings/performance-issues/n-plus-one-briefs.md (5 min)
  3. Next time similar issue occurs → Quick lookup (2 min)
  4. Knowledge compounds → Team gets smarter

The feedback loop:

Build → Test → Find Issue → Research → Improve → Document → Validate → Deploy
    ↑                                                                      ↓
    └──────────────────────────────────────────────────────────────────────┘

Each unit of engineering work should make subsequent units of work easier—not harder.

Auto-Invoke

<auto_invoke>
  <trigger_phrases>
    - "that worked"
    - "it's fixed"
    - "working now"
    - "problem solved"
  </trigger_phrases>

  <manual_override>
    Use skill: soleur:compound [context] to document immediately without waiting for auto-detection.
  </manual_override>
</auto_invoke>

Routes To

compound-capture skill

Applicable Specialized Agents

Based on problem type, these agents can enhance documentation:

Code Quality & Review

  • kieran-rails-reviewer: Reviews code examples for Rails best practices
  • code-simplicity-reviewer: Ensures solution code is minimal and clear
  • pattern-recognition-specialist: Identifies anti-patterns or repeating issues

Specific Domain Experts

  • performance-oracle: Analyzes performance_issue category solutions
  • security-sentinel: Reviews security_issue solutions for vulnerabilities
  • data-integrity-guardian: Reviews database_issue migrations and queries

Enhancement & Documentation

  • best-practices-researcher: Enriches solution with industry best practices
  • every-style-editor: Reviews documentation style and clarity
  • framework-docs-researcher: Links to Rails/gem documentation references

When to Invoke

  • Auto-triggered (optional): Agents can run post-documentation for enhancement
  • Manual trigger: User can invoke agents after soleur:compound completes for deeper review

Related Commands

  • /research [topic] - Deep investigation (searches knowledge-base/learnings/ for patterns)
  • soleur:plan skill - Planning workflow (references documented solutions)

Source

https://github.com/jikig-ai/soleur/blob/main/plugins/soleur/skills/compound/SKILL.md

Overview

Compound coordinates parallel subagents to capture fresh problem solutions and turn them into structured knowledge. It writes docs with YAML frontmatter into knowledge-base/learnings/ for quick search and future reference, accelerating onboarding and incident response.

How This Skill Works

The skill runs parallel subagents (Context Analyzer, Solution Extractor, Related Docs Finder) to generate a frontmatter skeleton and a solution content block. It enforces Phase 0.5 session error inventory and leverages project conventions from CLAUDE.md when present. In headless mode, prompts are auto-approved and arguments are forwarded to the appropriate capture step.

When to Use It

  • After solving a problem to capture the fix while context is fresh
  • When you want to add brief context hints to improve future reuse
  • When headless mode is enabled to automate learning capture
  • When you need to apply project conventions (e.g., CLAUDE.md) to documentation
  • When building a knowledge base to speed up future incident response

Quick Start

  1. skill: soleur:compound (document the most recent fix)
  2. skill: soleur:compound [brief context] (provide an additional context hint)
  3. skill: soleur:compound --headless (headless mode: auto-approve prompts)

Best Practices

  • Document immediately after solving to maximize accuracy
  • Leverage parallel subagents to speed up documentation
  • Include YAML frontmatter with searchable metadata (problem type, root cause, references)
  • Provide concise context hints to aid future contributors
  • Follow CLAUDE.md conventions and avoid sensitive data

Example Use Cases

  • Documented a recently fixed memory leak in a Python service to aid onboarding
  • Captured a race condition fix in a Node.js API for future reference
  • Recorded an ETL data-mismatch fix to improve data quality checks
  • Documented a bottleneck config issue in a Kubernetes deployment for ops
  • Captured UI bug and regression notes to prevent recurrence
