Delegate
Verified · @ivangdavila

npx machina-cli add skill @ivangdavila/delegate --openclaw

Core Rule
Spawn cost < task cost → delegate. Otherwise, do it yourself.
Model Tiers
| Tier | Models | Cost | Use for |
|---|---|---|---|
| Small | Haiku, GPT-4o-mini, Gemini Flash | ~$0.25/1M | Search, summarize, format, classify |
| Medium | Sonnet, GPT-4o, Gemini Pro | ~$3/1M | Code, analysis, synthesis |
| Large | Opus, o1, Gemini Ultra | ~$15/1M | Architecture, complex reasoning |
Rule of thumb: Start with the smallest tier. Escalate only if output quality is insufficient.
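The tier table above can be sketched as a cheapest-first lookup. This is a minimal sketch; the `TIERS` structure, the task categories, and `pick_tier` are illustrative assumptions, not part of the skill's actual API.

```python
# Hypothetical tier map; costs and task kinds mirror the table above.
TIERS = {
    "small":  {"cost_per_mtok": 0.25, "tasks": {"search", "summarize", "format", "classify"}},
    "medium": {"cost_per_mtok": 3.00, "tasks": {"code", "analysis", "synthesis"}},
    "large":  {"cost_per_mtok": 15.00, "tasks": {"architecture", "complex_reasoning"}},
}

def pick_tier(task_kind: str) -> str:
    """Return the cheapest tier whose task list covers task_kind."""
    for tier in ("small", "medium", "large"):  # cheapest first
        if task_kind in TIERS[tier]["tasks"]:
            return tier
    # Rule of thumb: start smallest and escalate only on poor output.
    return "small"
```

The cheapest-first iteration order is what encodes the rule of thumb: an unknown task kind defaults to Small rather than guessing a larger tier.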
Spawn Checklist
Every spawn must include:
1. TASK: Single clear deliverable (not "help with X")
2. MODEL: Explicit tier choice
3. CONTEXT: Only files/info needed (never full history)
4. OUTPUT: Expected format ("return JSON with...", "write to file X")
5. DONE: How to signal completion
Check templates.md for copy-paste spawn templates.
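The five checklist items can be captured as a small validated structure. A sketch under stated assumptions: `SpawnSpec` and `validate` are hypothetical names for illustration, not the skill's real interface.

```python
from dataclasses import dataclass, field

@dataclass
class SpawnSpec:
    """Hypothetical spawn payload; fields follow the checklist above."""
    task: str                 # single clear deliverable
    model: str                # explicit tier: "small" | "medium" | "large"
    context: list = field(default_factory=list)  # only files/info needed
    output: str = ""          # expected format, e.g. "return JSON with ..."
    done: str = ""            # how the sub-agent signals completion

def validate(spec: SpawnSpec) -> list:
    """Return checklist violations; an empty list means ready to spawn."""
    problems = []
    if not spec.task or spec.task.lower().startswith("help with"):
        problems.append("TASK must be a single clear deliverable")
    if spec.model not in ("small", "medium", "large"):
        problems.append("MODEL must name an explicit tier")
    if not spec.output:
        problems.append("OUTPUT format missing")
    if not spec.done:
        problems.append("DONE signal missing")
    return problems
```

Validating before spawning catches the most common failure (a vague "help with X" task) at zero model cost.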
Error Recovery
| Error Type | Action |
|---|---|
| Sub-agent timeout (>5 min no response) | Kill and retry once |
| Wrong output format | Retry with stricter instructions |
| Task too complex for tier | Escalate: Small→Medium→Large |
| Repeated failures (3x) | Abort, report to user |
Check errors.md for recovery patterns and escalation logic.
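The recovery table can be read as a retry loop. A minimal sketch, assuming `spawn` is a hypothetical callable returning a `(status, result)` pair and `spec` is a plain dict; the status strings are illustrative.

```python
TIER_ORDER = ["small", "medium", "large"]

def run_with_recovery(spawn, spec, max_failures=3):
    """Apply the recovery table: retry once on timeout, tighten the
    format instructions on bad output, escalate the tier on
    over-complexity, and abort after repeated failures."""
    failures = 0
    while failures < max_failures:
        status, result = spawn(spec)
        if status == "ok":
            return result
        failures += 1
        if status == "timeout":        # kill and retry once
            continue
        if status == "bad_format":     # retry with stricter instructions
            spec["output"] = "STRICT FORMAT: " + spec["output"]
        elif status == "too_complex":  # escalate Small -> Medium -> Large
            idx = TIER_ORDER.index(spec["model"])
            if idx + 1 < len(TIER_ORDER):
                spec["model"] = TIER_ORDER[idx + 1]
    raise RuntimeError("3 failures: abort and report to user")
```

Counting every non-ok outcome toward the same failure budget is what guarantees the 3x abort rule holds regardless of which error occurred.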
Verification
Never trust "done" without checking:
- Code: Run tests, check syntax
- Files: Verify they exist and have content
- Data: Spot-check 2-3 items
- Research: Confirm sources exist
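Two of the checks above (files and data) are mechanical enough to sketch in code. The helper names here are assumptions for illustration only.

```python
import random
from pathlib import Path

def verify_files(paths):
    """Files check: each path must exist and be non-empty.
    Returns the list of paths that fail, empty if all pass."""
    return [p for p in paths
            if not (Path(p).exists() and Path(p).stat().st_size > 0)]

def spot_check(items, k=3, check=lambda x: x is not None):
    """Data check: sample 2-3 items instead of trusting 'done'."""
    if not items:
        return False
    sample = random.sample(items, min(k, len(items)))
    return all(check(x) for x in sample)
```

Spot-checking a sample trades completeness for speed, which is the point: the goal is to catch a sub-agent that returned nothing, not to re-do its work.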
Don't Delegate
- Quick tasks (<30 seconds to do yourself)
- Tasks needing conversation context
- Anything requiring user clarification mid-task
Overview
Delegate routes tasks to sub-agents using tiered model selection to balance cost and quality. It enforces a core rule: spawn cost < task cost → delegate; otherwise do it yourself. It includes a spawn checklist, error recovery patterns, verification steps, and guidance on when not to delegate.
How This Skill Works
Choose a tier (Small, Medium, Large) based on task complexity, and spawn a sub-agent only if the tier's cost satisfies the core rule. Every spawn must include TASK, MODEL, CONTEXT, OUTPUT, and DONE per the Spawn Checklist; templates.md provides copy-paste templates. When errors occur (timeouts, wrong output format, or a task too complex for the tier), follow the Error Recovery table, escalating from Small to Medium to Large as needed, then verify results using the Verification steps (code, files, data, research).
When to Use It
- Use Small tier for routine text tasks (search, summarize, classify) to minimize cost.
- Delegate code, analysis, or synthesis tasks to Medium tier when higher quality or specialization is needed.
- Escalate to Large tier for architecture, complex reasoning, or tasks that require deep expertise.
- When a clearly defined, verifiable deliverable must be produced, with explicit OUTPUT and DONE signals.
- To increase throughput, parallelize by spawning multiple sub-tasks, then verify all results.
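The parallelization point above can be sketched with a thread pool. `fan_out` and `spawn` are hypothetical names; real sub-agent spawning would replace the callable.

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out(spawn, specs):
    """Spawn independent sub-tasks concurrently, then collect every
    result so each one can be verified individually."""
    with ThreadPoolExecutor(max_workers=len(specs)) as pool:
        return list(pool.map(spawn, specs))
```

`pool.map` preserves input order, which keeps each result aligned with its originating spec for per-task verification.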
Quick Start
- Step 1: Assess the task cost vs the spawn cost and decide whether to delegate using the core rule.
- Step 2: Prepare the spawn payload with TASK, MODEL, CONTEXT, OUTPUT, DONE and reference templates.md.
- Step 3: Spawn the sub-agent, monitor progress, apply error recovery if needed, and run verification before completion.
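The three quick-start steps can be sketched as one flow. A minimal sketch, assuming `spawn` and `verify` are hypothetical callables; only the core rule itself comes directly from this document.

```python
def delegate_flow(task_cost_usd, spawn_cost_usd, spec, spawn, verify):
    """Step 1: core rule (spawn cost < task cost -> delegate).
    Step 2-3: spawn with the prepared spec, then verify before
    signaling completion."""
    if spawn_cost_usd >= task_cost_usd:
        return "do-it-yourself"  # core rule says don't delegate
    result = spawn(spec)
    if not verify(result):       # never trust "done" without checking
        raise ValueError("verification failed")
    return result
```

Note the strict inequality: when spawn cost equals task cost, the rule falls back to doing the work yourself.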
Best Practices
- Always start with the Small tier; escalate only if output quality is insufficient.
- Include a precise TASK, explicit MODEL (tier), CONTEXT, OUTPUT format, and DONE signal in every spawn.
- Use the Spawn Checklist and reference templates.md for consistency and clarity.
- Apply the Error Recovery table: retry on timeouts, escalate on complex tasks, and verify outputs thoroughly.
- Run the Verification steps (code, files, data, research) before signaling completion.
Example Use Cases
- Delegate a 2-page data summary to a Small-tier sub-agent and verify that all items are present in the JSON output.
- Delegate a code analysis task to the Medium tier, run tests, and confirm results match expectations.
- Delegate a system architecture evaluation to the Large tier, review reasoning, and cross-check sources.
- Delegate dataset cleaning to a sub-agent, sample 2-3 items to ensure quality, and confirm changes exist in files.
- Delegate multiple quick sub-tasks in parallel to meet a tight deadline, then verify each result individually.