
agent-orchestration

npx machina-cli add skill parcadei/Continuous-Claude-v3/agent-orchestration --openclaw
Files (1)
SKILL.md
1.6 KB

Agent Orchestration Rules

When the user asks to implement something, use implementation agents to preserve main context.

The Pattern

Wrong - burns context:

Main: Read files → Understand → Make edits → Report
      (2000+ tokens consumed in main context)

Right - preserves context:

Main: Spawn agent("implement X per plan")
      ↓
Agent: Reads files → Understands → Edits → Tests
      ↓
Main: Gets summary (~200 tokens)
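The pattern above can be sketched in Python. `spawn_agent` here is a hypothetical stand-in for whatever sub-agent API your tooling provides; the point it illustrates is that file reading and editing happen inside the agent's own scope, and only a short summary crosses back into the main context.

```python
def spawn_agent(task: str) -> str:
    """Simulate an implementation agent: it does the heavy work
    (reading files, understanding, editing, testing) in its own
    context and returns only a short summary to the caller."""
    # ... agent reads files, makes edits, runs tests here ...
    return f"Done: {task}. 3 files edited, tests pass."

# The main context never touches the files; it only sees the summary.
summary = spawn_agent("implement X per plan")
print(summary)
```

The main context's cost is the length of `summary`, not the size of the files the agent read.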

When to Use Agents

| Task Type | Use Agent? | Reason |
|---|---|---|
| Multi-file implementation | Yes | Agent handles complexity internally |
| Following a plan phase | Yes | Agent reads plan, implements |
| New feature with tests | Yes | Agent can run tests |
| Single-line fix | No | Faster to do directly |
| Quick config change | No | Overhead not worth it |

Key Insight

Agents read their own context. Don't read files in the main chat just to understand what to pass to an agent; give the agent the task and let it figure things out.

Example Prompt

Implement Phase 4: Outcome Marking Hook from the Artifact Index plan.

**Plan location:** thoughts/shared/plans/2025-12-24-artifact-index.md (search for "Phase 4")

**What to create:**
1. TypeScript hook
2. Shell wrapper
3. Python script
4. Register in settings.json

When done, provide a summary of files created and any issues.

Trigger Words

When the user says these, consider using an agent:

  • "implement", "build", "create feature"
  • "follow the plan", "do phase X"
  • "use implementation agents"
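If you route requests programmatically, the trigger list above reduces to a simple check. This is a minimal sketch, assuming a plain substring match is acceptable; the word list mirrors the bullets and can be extended.

```python
# Trigger phrases from the list above; "do phase" covers "do phase X".
TRIGGERS = (
    "implement", "build", "create feature",
    "follow the plan", "do phase", "use implementation agents",
)

def should_use_agent(request: str) -> bool:
    """Return True when the request contains an orchestration trigger."""
    lowered = request.lower()
    return any(trigger in lowered for trigger in TRIGGERS)
```

A real router would also apply the table's exceptions (single-line fixes, quick config changes) before spawning an agent.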

Source

git clone https://github.com/parcadei/Continuous-Claude-v3

Skill file: .claude/skills/agent-orchestration/SKILL.md

Overview

Agent Orchestration Rules define how to use implementation agents to preserve the main context when a user asks for an implementation. The skill emphasizes spawning an agent to perform reading, understanding, editing, and testing, then returning a concise summary to the main context. This helps manage complexity and keep the main conversation lean across multi-file changes.

How This Skill Works

When a task is requested, the system spawns an agent (e.g., agent("implement X per plan")). The agent reads the relevant files, understands the requirements, makes edits, and runs tests. After completion, the main context receives a summary (~200 tokens) of what was done and any issues encountered.

When to Use It

  • Multi-file implementation: use an agent to manage complexity across files.
  • Following a plan phase: have the agent read and implement according to a plan.
  • New feature with tests: agents can implement features and run tests.
  • Single-line fix: avoid agent overhead for speed; do it directly.
  • Quick config change: overhead of agents may not be worth it for small tweaks.

Quick Start

  1. Step 1: Spawn an implementation agent with the task (e.g., agent("implement X per plan")).
  2. Step 2: Let the agent read files, understand requirements, edit code, and run tests.
  3. Step 3: Retrieve the agent's summary (~200 tokens) detailing what was created and any issues.
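A rough guard for step 3 is to verify the agent's summary actually stays near the ~200-token budget before it enters the main context. The sketch below uses a 4-characters-per-token heuristic, which is an assumption, not an exact tokenizer.

```python
def approx_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def within_budget(summary: str, budget: int = 200) -> bool:
    """True when the summary fits the token budget for main context."""
    return approx_tokens(summary) <= budget

short = "Created hook.ts, wrapper.sh, index.py; registered in settings.json."
print(within_budget(short))
```

If a summary blows the budget, ask the agent to re-summarize rather than pasting its full output into the main chat.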

Best Practices

  • Use agents for multi-file implementations to manage internal complexity.
  • Always have agents read and follow an explicit plan phase rather than passing raw instructions in the main chat.
  • Leverage agents for features that require tests, so edits and test runs are handled automatically.
  • For quick, simple changes (single-line fixes or config tweaks), perform the task directly to avoid overhead.
  • Require and review a concise summary of results (and any issues) after the agent completes.

Example Use Cases

  • Example: Implement Phase 4: Outcome Marking Hook from the Artifact Index plan. Plan location: thoughts/shared/plans/2025-12-24-artifact-index.md (search for 'Phase 4'). What to create: 1) TypeScript hook 2) Shell wrapper 3) Python script 4) Register in settings.json. When done, provide a summary of files created and any issues.
  • Example: Spawn agent("implement X per plan"); Agent reads files → understands → edits → tests; Main receives a ~200-token summary.
  • Example: Following a plan phase: Agent reads the plan, implements according to said plan, and returns progress and any blockers to the main context.
  • Example: New feature with tests: Agent creates code, adds tests, runs test suite, and reports back with test results and coverage notes.
  • Example: Quick config change: For a tiny config tweak, skip the orchestration overhead and apply directly in the main chat to save time.
