# loop-fixer

Install: `npx machina-cli add skill Ibrahim-3d/conductor-orchestrator-superpowers/loop-fixer --openclaw`

Loop Fixer Agent — Step 5: FIX
Handles the loop-back when an evaluation fails. Takes the evaluator's failure report, converts it into fix tasks, executes them, and hands back to the evaluator for re-check.
## Inputs Required

- Evaluation Report — from either `loop-plan-evaluator` or `loop-execution-evaluator`
- Track's `plan.md` — to add fix tasks
- Track's `spec.md` — to verify fixes align with requirements
## Workflow
### 1. Parse Evaluation Failures

Read the evaluation report and extract:

- Which passes failed (scope, overlap, deliverables, build, quality, etc.)
- Specific fix instructions from the evaluator
- Severity of each issue
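This parsing step can be sketched as follows. The report line format assumed here (`FAIL [severity] pass: instruction`) is a hypothetical convention for illustration only; the evaluator's actual output format is not specified in this skill, so adjust the pattern to match it.

```python
import re

def parse_evaluation_report(report_text: str) -> list[dict]:
    """Extract fix items from an evaluator failure report.

    Assumes each failure appears on its own line as:
        FAIL [severity] <pass name>: <fix instruction>
    This line shape is an assumption, not part of the skill spec.
    """
    pattern = re.compile(r"^FAIL \[(\w+)\] ([^:]+): (.+)$", re.MULTILINE)
    return [
        {"severity": sev, "failed_pass": name.strip(), "instruction": instr.strip()}
        for sev, name, instr in pattern.findall(report_text)
    ]

report = """\
FAIL [high] build: Restore the missing dependency in package.json
FAIL [medium] quality: Add unit tests for the unlock path
"""
issues = parse_evaluation_report(report)
```

Each extracted item carries the severity and the evaluator's specific instruction, which feed directly into the Fix Phase tasks in the next step.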
### 2. Create Fix Tasks in plan.md

Add a "Fix Phase" section to plan.md:

```markdown
## Fix Phase (from Evaluation on [date])

### Issues to Fix

Source: [loop-plan-evaluator / loop-execution-evaluator] report

- [ ] Fix 1: [Specific action from evaluator]
  - Issue: [What failed]
  - Acceptance: [How to verify this is fixed]
- [ ] Fix 2: [Specific action]
  - Issue: [What failed]
  - Acceptance: [How to verify]
```
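Generating that Fix Phase section from parsed issues can be sketched as below. The `issues` dict keys (`instruction`, `failed_pass`, `acceptance`) are an assumed schema for illustration, not something this skill prescribes.

```python
from datetime import date

def render_fix_phase(issues: list[dict], source: str) -> str:
    """Render a Fix Phase section to append to plan.md.

    Each issue dict is assumed to carry "instruction", "failed_pass",
    and "acceptance" keys; the schema is illustrative only.
    """
    lines = [
        f"## Fix Phase (from Evaluation on {date.today().isoformat()})",
        "",
        "### Issues to Fix",
        "",
        f"Source: {source} report",
        "",
    ]
    for i, issue in enumerate(issues, start=1):
        lines.append(f"- [ ] Fix {i}: {issue['instruction']}")
        lines.append(f"  - Issue: {issue['failed_pass']}")
        lines.append(f"  - Acceptance: {issue['acceptance']}")
    return "\n".join(lines) + "\n"

section = render_fix_phase(
    [{"instruction": "Restore missing dependency",
      "failed_pass": "build pass failed",
      "acceptance": "build exits with code 0"}],
    source="loop-execution-evaluator",
)
```

The rendered section is then appended to the track's plan.md so every fix has a checkbox, the failure it addresses, and an acceptance check.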
### 3. Execute Fixes

Follow the same protocol as loop-executor:

- Mark each fix `[~]` when starting
- Implement the fix
- Mark `[x]` with commit SHA and summary when done
- Commit after each fix
### 4. Verify Fixes Locally

Before handing back to the evaluator, do a quick self-check:

- Does the fix address what the evaluator flagged?
- Did the fix introduce any new issues?
- Does the build still pass?
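The build part of the self-check can be sketched as a thin wrapper around the project's build command. The command itself is project-specific and not prescribed by this skill; the `sys.executable` call below is a stand-in so the sketch runs anywhere.

```python
import subprocess
import sys

def self_check(build_cmd: list[str]) -> str:
    """Run the track's build and map the result to the Fix Summary field.

    build_cmd is whatever builds this track (e.g. ["npm", "run", "build"]);
    the specific command is an assumption of the caller, not of this skill.
    Returns "PASS" or "CONCERNS" for the **Self-Check** line.
    """
    result = subprocess.run(build_cmd, capture_output=True, text=True)
    return "PASS" if result.returncode == 0 else "CONCERNS"

# Stand-in command so the sketch is runnable without a real project build.
status = self_check([sys.executable, "-c", "pass"])
```

A non-zero exit code yields "CONCERNS", which signals that the fixer should investigate before requesting re-evaluation.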
### 5. Request Re-Evaluation

```markdown
## Fix Summary

**Fixes Completed**: [X]/[Y]
**Commits**: [list]
**Self-Check**: [PASS/CONCERNS]
**Ready for**: Re-evaluation → hand back to [loop-plan-evaluator / loop-execution-evaluator]
```
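Filling in that Fix Summary template can be sketched as a small formatter; the field names mirror the template above, while the function name and parameters are illustrative.

```python
def render_fix_summary(done: int, total: int, commits: list[str],
                       self_check: str, evaluator: str) -> str:
    """Fill in the Fix Summary template with this cycle's results."""
    return "\n".join([
        "## Fix Summary",
        "",
        f"**Fixes Completed**: {done}/{total}",
        f"**Commits**: {', '.join(commits)}",
        f"**Self-Check**: {self_check}",
        f"**Ready for**: Re-evaluation → hand back to {evaluator}",
    ])

summary = render_fix_summary(
    2, 2, ["abc1234", "def5678"], "PASS", "loop-execution-evaluator"
)
```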
## Loop Mechanics

The fix cycle continues until the evaluator returns PASS:

```
FAIL → Fixer creates fix tasks → Fixer executes → Evaluator re-checks
  │                                                      │
  │                                              PASS → Done ✅
  │                                              FAIL → loop again
  └──────────────────────────────────────────────────────┘
```
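This cycle, together with the three-cycle guardrail described in the Guardrails section, can be sketched as a control loop. The `evaluate` and `fix` callables are placeholders for the evaluator and fixer agents; their signatures here are illustrative only.

```python
MAX_FIX_CYCLES = 3  # guardrail: escalate to the user after 3 failed rounds

def run_fix_loop(evaluate, fix) -> str:
    """Drive the FAIL → fix → re-evaluate loop until PASS or escalation.

    evaluate() returns a report dict with a "status" key ("PASS"/"FAIL");
    fix(report) creates fix tasks in plan.md and executes them.
    Both callables are stand-ins for the real agents.
    """
    cycle = 0
    while True:
        report = evaluate()
        if report["status"] == "PASS":
            return "PASS"
        cycle += 1
        if cycle > MAX_FIX_CYCLES:
            return "ESCALATE"  # still failing after 3 fix rounds
        fix(report)

# Simulated evaluator: fails twice, then passes on the third check.
results = iter([{"status": "FAIL"}, {"status": "FAIL"}, {"status": "PASS"}])
outcome = run_fix_loop(lambda: next(results), lambda report: None)
```

An evaluator that never passes exhausts the three fix rounds and the loop returns "ESCALATE" instead of running forever.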
## Guardrails

- Max 3 fix cycles — if still failing after 3 rounds, escalate to user
- Scope guard — fixes must address evaluator's specific issues, not add new features
- plan.md always updated — every fix task gets marked `[x]` with summary
## Metadata Checkpoint Updates

The fixer MUST update the track's metadata.json at key points:

### On Start
```json
{
  "loop_state": {
    "current_step": "FIX",
    "step_status": "IN_PROGRESS",
    "step_started_at": "[ISO timestamp]",
    "fix_cycle_count": 1,
    "checkpoints": {
      "FIX": {
        "status": "IN_PROGRESS",
        "started_at": "[ISO timestamp]",
        "agent": "loop-fixer",
        "cycle": 1,
        "fixes_applied": [],
        "fixes_remaining": ["Fix 1", "Fix 2", "Fix 3"]
      }
    }
  }
}
```
### After Each Fix
```json
{
  "loop_state": {
    "checkpoints": {
      "FIX": {
        "status": "IN_PROGRESS",
        "fixes_applied": [
          { "issue": "Lock propagation broken", "fix": "Updated cascade logic", "commit_sha": "abc1234" }
        ],
        "fixes_remaining": ["Fix 2", "Fix 3"]
      }
    }
  }
}
```
### On Completion (Ready for Re-evaluation)
```json
{
  "loop_state": {
    "current_step": "EVALUATE_EXECUTION",
    "step_status": "NOT_STARTED",
    "checkpoints": {
      "FIX": {
        "status": "PASSED",
        "completed_at": "[ISO timestamp]",
        "cycle": 1,
        "fixes_applied": [
          { "issue": "Lock propagation broken", "fix": "Updated cascade logic", "commit_sha": "abc1234" },
          { "issue": "Missing test coverage", "fix": "Added unlock tests", "commit_sha": "def5678" }
        ],
        "fixes_remaining": []
      },
      "EVALUATE_EXECUTION": {
        "status": "NOT_STARTED"
      }
    }
  }
}
```
Fix Cycle Management
fix_cycle_countinloop_statetracks total cycles across the track- Each FIX checkpoint's
cyclefield tracks which cycle number - If
fix_cycle_count >= 3: Escalate to user instead of continuing - On escalation:
```json
{
  "loop_state": {
    "step_status": "BLOCKED",
    "checkpoints": {
      "FIX": {
        "status": "BLOCKED"
      }
    }
  },
  "blockers": [{
    "id": "blocker-1",
    "description": "Fix cycle limit exceeded (3 cycles)",
    "blocked_at": "[timestamp]",
    "blocked_step": "FIX",
    "status": "ACTIVE"
  }]
}
```
## Update Protocol

- Read current `metadata.json`
- Check `fix_cycle_count` — if >= 3, escalate to user
- Increment `fix_cycle_count` at start
- Update `fixes_applied` and `fixes_remaining` after each fix
- On completion: set `current_step` back to the evaluator step
- Write back to `metadata.json`
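The start-of-step portion of this protocol can be sketched as a single helper. The field names mirror the metadata snippets earlier; the file handling and the `RuntimeError` escalation signal are illustrative assumptions.

```python
import json
from pathlib import Path

def start_fix_step(metadata_path: str) -> dict:
    """Read metadata.json, enforce the cycle limit, and record FIX start.

    Raises RuntimeError (a stand-in for escalating to the user) when
    fix_cycle_count has already reached 3.
    """
    path = Path(metadata_path)
    meta = json.loads(path.read_text())
    state = meta.setdefault("loop_state", {})

    # Guardrail check happens BEFORE incrementing the counter.
    if state.get("fix_cycle_count", 0) >= 3:
        raise RuntimeError("Fix cycle limit exceeded (3 cycles) — escalate to user")

    state["fix_cycle_count"] = state.get("fix_cycle_count", 0) + 1
    state["current_step"] = "FIX"
    state["step_status"] = "IN_PROGRESS"

    path.write_text(json.dumps(meta, indent=2))
    return meta
```

After each fix the same read-modify-write pattern would move the completed item from `fixes_remaining` into `fixes_applied`, and on completion it would set `current_step` back to the evaluator step.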
## Handoff

After fixes complete → Conductor dispatches the original evaluator agent to re-run:

- Plan fixes → `loop-plan-evaluator`
- Execution fixes → `loop-execution-evaluator`
## Source

https://github.com/Ibrahim-3d/conductor-orchestrator-superpowers/blob/master/skills/loop-fixer/SKILL.md

## Overview
Loop Fixer handles the loop-back when an evaluation fails. It ingests the evaluator’s failure report, converts it into concrete fix tasks in plan.md, executes the fixes, and hands control back to the evaluator for re-check. It manages the Evaluate-Loop flow and enforces guardrails to prevent endless cycles.
## How This Skill Works
It parses the evaluation failure to identify failed passes and specific fix instructions, creates a Fix Phase in plan.md with actionable tasks, and then executes each fix while updating status. After fixes are applied, it runs a quick self-check and triggers a re-evaluation by the appropriate evaluator.
## When to Use It
- When an evaluation returns FAIL
- When the evaluator issues a fix directive or provides a failure report
- When failures must be translated into concrete tasks in plan.md
- When fixes are ready to be executed and committed with traceable SHAs
- When preparing for a re-evaluation and looping back into the evaluator
## Quick Start
- Step 1: Parse the Evaluation Report to identify failed passes and fix instructions
- Step 2: Create a Fix Phase section in plan.md with explicit Fix tasks and acceptance criteria
- Step 3: Execute each fix, update plan.md and metadata, and trigger re-evaluation
## Best Practices
- Always map each fix to a specific evaluator instruction and failure detail
- Update plan.md with a dedicated Fix Phase and clear acceptance criteria
- Mark fixes as started [~] and completed [x] with commit SHA and summary
- Perform a quick local check (build, tests, and scan for new issues) before re-check
- Limit fix cycles to three and escalate if the evaluator does not PASS after the third round
## Example Use Cases
- Fix a build failure caused by a missing dependency identified by the evaluator
- Resolve overlapping deliverables flagged during quality evaluation
- Address a data mismatch reported during plan evaluation
- Correct a timeout or flaky test reported by the execution evaluator
- Realign with spec requirements after a gap is identified in the verification pass