
evolving-loop

npx machina-cli add skill claude-world/director-mode-lite/evolving-loop --openclaw
Files (1)
SKILL.md
7.2 KB

Self-Evolving Development Loop

Execute an autonomous development cycle that dynamically generates, validates, and evolves its own execution strategy. Integrates with Meta-Engineering memory system for pattern learning and tool evolution.

Architecture Details: See docs/EVOLVING-LOOP-ARCHITECTURE.md


Usage

# Start new task
/evolving-loop "Your task description

Acceptance Criteria:
- [ ] Criterion 1
- [ ] Criterion 2
"

# Flags
/evolving-loop --resume    # Resume interrupted session
/evolving-loop --status    # Check status
/evolving-loop --force     # Clear and restart
/evolving-loop --evolve    # Trigger manual evolution
/evolving-loop --memory    # Show memory system status

How It Works

┌──────────────────────────────────────────────────────┐
│  8-Phase Self-Evolving Loop                          │
├──────────────────────────────────────────────────────┤
│                                                      │
│  Phase -2: CONTEXT_CHECK  → Check token pressure     │
│  Phase -1A: PATTERN_LOOKUP → Match task patterns     │
│                                                      │
│  ┌─────────────── Main Loop ───────────────┐         │
│  │ Phase 1: ANALYZE   → Extract AC         │         │
│  │ Phase 2: GENERATE  → Create skills      │         │
│  │ Phase 3: EXECUTE   → TDD implementation │         │
│  │ Phase 4: VALIDATE  → Score 0-100        │         │
│  │ Phase 5: DECIDE    → SHIP/FIX/EVOLVE    │         │
│  │ Phase 6: LEARN     → Extract patterns   │         │
│  │ Phase 7: EVOLVE    → Improve skills     │         │
│  └──────────────────────────────────────────┘         │
│                                                      │
│  Phase -1C: EVOLUTION → Update memory (on SHIP)      │
│                                                      │
└──────────────────────────────────────────────────────┘
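The main loop above can be sketched as a simple driver that repeats ANALYZE through VALIDATE until DECIDE returns SHIP. This is an illustrative sketch only, not the orchestrator's actual implementation; `run_phase` is a hypothetical stand-in and the DECIDE logic is hard-coded for demonstration:

```shell
#!/usr/bin/env bash
# Simplified driver for the main loop (illustrative only).
run_phase() { echo "phase:$1"; }   # stand-in for real phase execution

DECISION="FIX"
ITER=0
while [ "$DECISION" != "SHIP" ] && [ "$ITER" -lt 3 ]; do
    ITER=$((ITER + 1))
    for PHASE in ANALYZE GENERATE EXECUTE VALIDATE; do
        run_phase "$PHASE"
    done
    # DECIDE would normally read the validator's score; hard-coded here.
    if [ "$ITER" -ge 2 ]; then DECISION="SHIP"; fi
done
run_phase LEARN
run_phase EVOLVE
echo "done after $ITER iterations"
```

In the real loop, the iteration cap and the SHIP/FIX/EVOLVE decision come from the VALIDATE score and the completion judge rather than a fixed counter.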

Execution

When the user runs /evolving-loop "$ARGUMENTS":

1. Handle Flags

STATE_DIR=".self-evolving-loop"
MEMORY_DIR=".claude/memory/meta-engineering"
CHECKPOINT="$STATE_DIR/state/checkpoint.json"

# --status: Show current state
if [[ "$ARGUMENTS" == *"--status"* ]]; then
    /evolving-status
    exit 0
fi

# --memory: Show memory system status
if [[ "$ARGUMENTS" == *"--memory"* ]]; then
    echo "Memory System Status:"
    if [ -d "$MEMORY_DIR" ]; then
        echo "Tool Usage: $(jq '.tools | length' "$MEMORY_DIR/tool-usage.json" 2>/dev/null || echo "0") tools"
        echo "Patterns: $(jq '.task_patterns | keys | length' "$MEMORY_DIR/patterns.json" 2>/dev/null || echo "0") patterns"
        echo "Evolution: v$(jq -r '.version' "$MEMORY_DIR/evolution.json" 2>/dev/null || echo "0")"
    else
        echo "(Not initialized - will create on first run)"
    fi
    exit 0
fi

# --resume: Continue from checkpoint
if [[ "$ARGUMENTS" == *"--resume"* ]]; then
    if [ ! -f "$CHECKPOINT" ] || [ "$(jq -r '.status' "$CHECKPOINT")" == "idle" ]; then
        echo "No active session to resume."
        exit 1
    fi
fi

# --force: Clear old state
if [[ "$ARGUMENTS" == *"--force"* ]]; then
    # Note: globs must be outside the quotes, or they will not expand
    rm -rf "$STATE_DIR"/state/* "$STATE_DIR"/reports/* "$STATE_DIR"/generated-skills/*
fi

2. Initialize (First-Run Safe)

# Create directories (first-run safe)
mkdir -p "$MEMORY_DIR"
mkdir -p "$STATE_DIR"/{state,reports,generated-skills,history,backups}

# Helper: Read JSON with fallback
read_json_safe() {
    local file="$1"
    local default="$2"
    if [ -f "$file" ]; then
        cat "$file" 2>/dev/null || echo "$default"
    else
        echo "$default"
    fi
}

# Detect first run
IS_FIRST_RUN=false
if [ ! -f "$MEMORY_DIR/patterns.json" ]; then
    IS_FIRST_RUN=true
    echo "📝 First run detected - initializing memory system..."
fi

# Initialize memory files if missing (see docs for full schema)
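The initialization step can be sketched as below. The file names match the persistent memory layout used by the --memory flag, but the JSON skeletons are minimal illustrations; the full schema lives in the architecture docs:

```shell
#!/usr/bin/env bash
MEMORY_DIR=".claude/memory/meta-engineering"
mkdir -p "$MEMORY_DIR"

# Seed each memory file with a minimal skeleton if it is missing.
# These shapes are illustrative; see docs for the full schema.
[ -f "$MEMORY_DIR/tool-usage.json" ] || echo '{"tools": []}'         > "$MEMORY_DIR/tool-usage.json"
[ -f "$MEMORY_DIR/patterns.json" ]   || echo '{"task_patterns": {}}' > "$MEMORY_DIR/patterns.json"
[ -f "$MEMORY_DIR/evolution.json" ]  || echo '{"version": 1}'        > "$MEMORY_DIR/evolution.json"
```

The skeleton keys (tools, task_patterns, version) are chosen to satisfy the jq queries used by the --memory flag above.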

3. Delegate to Orchestrator

CRITICAL: Use context isolation; the orchestrator runs in a fork context.

Task(subagent_type="evolving-orchestrator", prompt="""
Request: $ARGUMENTS
Task Type: $TASK_TYPE (from pattern matching)

Execute phases in sequence, each in fork context.
Return only brief status updates (1 line per phase).
Store ALL detailed output in files.

Return format:
📊 CONTEXT: [OK/Warning] - [N]% usage
🔍 PATTERNS: Matched [type], [N] recommendations
✅ ANALYZE: [N] AC identified
✅ GENERATE: Created v[N] skills
🔄 EXECUTE: Iter [N] - [status]
✅ VALIDATE: Score [N]/100
➡️ DECIDE: [SHIP/FIX/EVOLVE]
""")

Output Example

🚀 Starting Self-Evolving Loop (Meta-Engineering v2.0)...

📊 CONTEXT: OK - 15% usage
🔍 PATTERNS: Matched 'auth', 3 recommendations
✅ ANALYZE: 5 acceptance criteria identified
✅ GENERATE: Created executor-v1, validator-v1, fixer-v1
🔄 EXECUTE: Iteration 1 - 4 files modified, 3/5 tests passing
✅ VALIDATE: Score 72/100
➡️ DECIDE: FIX (minor test failures)
🔄 EXECUTE: Iteration 2 - 2 files modified, 5/5 tests passing
✅ VALIDATE: Score 94/100
➡️ DECIDE: SHIP
📚 LEARN: 2 patterns identified
🧬 EVOLUTION: Updated memory
✅ SHIP: All criteria met!

📊 Summary: 2 iterations, 6 files changed, 5/5 AC complete

Phase Agents

| Phase    | Agent                 | Output File             |
|----------|-----------------------|-------------------------|
| ANALYZE  | requirement-analyzer  | reports/analysis.json   |
| GENERATE | skill-synthesizer     | generated-skills/*.md   |
| EXECUTE  | (generated executor)  | codebase changes        |
| VALIDATE | (generated validator) | reports/validation.json |
| DECIDE   | completion-judge      | reports/decision.json   |
| LEARN    | experience-extractor  | reports/learning.json   |
| EVOLVE   | skill-evolver         | evolved skills          |
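Because every phase persists its output as a file, results can be inspected directly between or after runs. A sketch, assuming the validation report exposes a `score` field and the decision report a `decision` field (the exact field names are not specified here, so check the actual reports):

```shell
#!/usr/bin/env bash
REPORTS=".self-evolving-loop/reports"

# Pull the validation score and decision out of the phase output files.
# Falls back to defaults if the files (or jq) are unavailable.
SCORE=$(jq -r '.score // 0' "$REPORTS/validation.json" 2>/dev/null || echo 0)
DECISION=$(jq -r '.decision // "UNKNOWN"' "$REPORTS/decision.json" 2>/dev/null || echo UNKNOWN)
echo "VALIDATE: ${SCORE}/100, DECIDE: ${DECISION}"
```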

State Files

.self-evolving-loop/          ← Session state (temporary)
├── state/checkpoint.json     ← Current state
├── reports/*.json            ← Phase outputs
├── generated-skills/*.md     ← Dynamic skills
└── history/*.jsonl           ← Event logs

.claude/memory/meta-engineering/  ← Persistent memory
├── tool-usage.json           ← Usage statistics
├── patterns.json             ← Learned patterns
└── evolution.json            ← Evolution history
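Since .self-evolving-loop/ is disposable while the memory directory persists across sessions, it can be worth snapshotting memory before an --evolve run. A minimal sketch; the backups/ location mirrors the directory created during initialization, and this is an illustration rather than built-in behavior:

```shell
#!/usr/bin/env bash
MEMORY_DIR=".claude/memory/meta-engineering"
BACKUP_DIR=".self-evolving-loop/backups/memory-$(date +%Y%m%d-%H%M%S)"

# Copy persistent memory aside so a bad evolution can be rolled back.
mkdir -p "$BACKUP_DIR"
if [ -d "$MEMORY_DIR" ]; then
    cp -r "$MEMORY_DIR/." "$BACKUP_DIR/"
fi
echo "Memory snapshot: $BACKUP_DIR"
```

Restoring is the same copy in reverse once you have confirmed the evolved memory is worse than the snapshot.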

Stop / Resume

# Stop after current phase
touch .self-evolving-loop/state/stop

# Resume later
/evolving-loop --resume
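The stop-file mechanism can be honored with a simple check between phases. This is a sketch of the mechanism, not the skill's actual implementation; `check_stop` is a hypothetical helper:

```shell
#!/usr/bin/env bash
STOP_FILE=".self-evolving-loop/state/stop"

# Called between phases: bail out cleanly if the user requested a stop.
check_stop() {
    if [ -f "$STOP_FILE" ]; then
        rm -f "$STOP_FILE"   # consume the request so the next run starts clean
        echo "Stop requested - checkpoint kept, resume with --resume"
        return 1
    fi
    return 0
}

check_stop || exit 0
echo "continuing to next phase"
```

Because the checkpoint file is left in place, a later /evolving-loop --resume can pick up exactly where the loop stopped.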

Source

git clone https://github.com/claude-world/director-mode-lite

View on GitHub: https://github.com/claude-world/director-mode-lite/blob/main/skills/evolving-loop/SKILL.md

Overview

The Self-Evolving Development Loop autonomously generates, validates, and evolves its own execution strategy. It integrates with a Meta-Engineering memory system to learn patterns and drive tool evolution, enabling continuous improvement. Built around an 8-phase loop, it analyzes requirements, generates skills, executes and validates changes, and updates persistent memory when work ships.

How This Skill Works

It runs an 8-Phase Self-Evolving Loop: CONTEXT_CHECK and PATTERN_LOOKUP precede the main cycle, then ANALYZE, GENERATE, EXECUTE, VALIDATE, DECIDE, LEARN, and EVOLVE drive the work. On a successful SHIP, the EVOLUTION phase updates memory, enabling ongoing pattern learning and tool evolution; flags such as --status, --memory, --resume, and --force control flow and state.

When to Use It

  • You need an autonomous task processor that learns from outcomes and evolves its strategy over time.
  • A project requires pattern-based automation where patterns are not yet captured in memory.
  • Rapid adaptation is required as requirements change and tools evolve.
  • You want continuous improvement of execution strategies without manual reconfiguration.
  • Building scalable automation that can validate and ship improvements with memory-driven evolution.

Quick Start

  1. Run /evolving-loop "Your task description with Acceptance Criteria: [ ]"
  2. Use flags like --status, --memory, and --resume to control or inspect progress
  3. Review the results and memory updates; the DECIDE phase reports SHIP, FIX, or EVOLVE

Best Practices

  • Define explicit Acceptance Criteria for every task to enable objective VALIDATE scoring (0-100).
  • Use memory and pattern learning responsibly; review EVOLVE results before shipping.
  • Leverage --status and --memory flags to monitor progress and memory health during runs.
  • Treat SHIP as a meaningful event that triggers memory updates (EVOLUTION phase).
  • Keep safety checks and fallback paths in place for cases where DECIDE chooses FIX or EVOLVE.

Example Use Cases

  • An autonomous coding assistant that analyzes a bug, generates fixes, tests them (TDD), validates outcomes, ships improvements, and evolves its toolkit based on results.
  • A data workflow agent that identifies repeating data-cleaning patterns, generates new templates, executes pipelines, validates accuracy, and expands its pattern library over time.
  • A modular AI agent that discovers new tools, integrates them into its memory, and evolves its skillset to tackle broader task categories.
  • A CI/CD automation bot that learns from deployment feedback, updates its strategies, and ships refined deployment playbooks.
  • A research assistant that extracts patterns from experiments, generates hypotheses and methods, validates results, ships validated approaches, and evolves its memory.
