agentic-workflow
npx machina-cli add skill parcadei/Continuous-Claude-v3/agentic-workflow --openclaw
Agentic Workflow Pattern
Standard multi-agent pipeline for implementation tasks.
Architecture Principles
- Use run_in_background: true for all agents to keep the main context minimal
- Use the Task tool (never TaskOutput) to avoid receiving full agent transcripts
- Agents write outputs to .claude/cache/agents/<stage>/ for injection into subsequent agents
- Main conversation is pure orchestration — no heavy lifting, only coordination
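The file-based handoff convention above can be sketched as a small path helper. This is illustrative only: the stage names match the directories used later in this document, but the helper function and suffix mapping are hypothetical, not part of the skill itself.

```python
from pathlib import Path

# Root under which every agent stage writes its artifact, per the convention above.
CACHE_ROOT = Path(".claude/cache/agents")

# Hypothetical stage -> filename-suffix mapping, matching the paths shown below.
STAGE_SUFFIX = {
    "oracle": "research",
    "plan-agent": "plan",
    "validate-agent": "validated",
    "implement-agent": "implementation",
    "review-agent": "review",
}

def stage_output_path(stage: str, task: str) -> Path:
    """Return the artifact path a given stage writes for a given task."""
    return CACHE_ROOT / stage / f"{task}-{STAGE_SUFFIX[stage]}.md"
```

Later agents then read these paths directly, so the orchestrator never has to carry stage output in its own context.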
Workflow Stages
1. Research Agent
Task(subagent_type="oracle", run_in_background=true, prompt="""
Query NIA Oracle (via /nia-docs skill) to verify approach and gather best practices.
Output to: .claude/cache/agents/oracle/<task>-research.md
""")
- Enforce NIA as the research layer
- Output: Research findings
2. Planning Agent
Task(subagent_type="plan-agent", run_in_background=true, prompt="""
Read: .claude/cache/agents/oracle/<task>-research.md
Use RP-CLI to analyze the target codebase section.
Generate implementation plan informed by research.
Output to: .claude/cache/agents/plan-agent/<task>-plan.md
""")
- Receives: Research agent output as context
- Output: Implementation plan
3. Validation Agent
Task(subagent_type="validate-agent", run_in_background=true, prompt="""
Read: .claude/cache/agents/plan-agent/<task>-plan.md
Read: .claude/cache/agents/oracle/<task>-research.md
Review plan against research findings and best practices.
Output to: .claude/cache/agents/validate-agent/<task>-validated.md
""")
- Reviews plan against research
- Output: Validated plan with amendments
4. Implementation Agent
Task(subagent_type="agentica-agent", run_in_background=true, prompt="""
Read: .claude/cache/agents/validate-agent/<task>-validated.md
Read: .claude/cache/agents/oracle/<task>-research.md
TDD approach: Write failing tests FIRST, then implement.
Run tests to verify.
Output summary to: .claude/cache/agents/implement-agent/<task>-implementation.md
""")
- Receives: Validated plan + research context
- TDD: Failing tests first
- Output: Implementation + tests
5. Review Agent
Task(subagent_type="review-agent", run_in_background=true, prompt="""
Read: .claude/cache/agents/implement-agent/<task>-implementation.md
Read: .claude/cache/agents/validate-agent/<task>-validated.md
Read: .claude/cache/agents/oracle/<task>-research.md
Cross-reference implementation against plan and research.
Run tests to confirm passing.
Output to: .claude/cache/agents/review-agent/<task>-review.md
""")
- Cross-references all artifacts
- Confirms tests pass
- Output: Review summary
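Taken together, the five stages form a strictly sequential pipeline in which the orchestrator only launches agents and waits for their artifact files. A minimal sketch of that loop, assuming a hypothetical launch_agent callable standing in for the Task tool:

```python
import time
from pathlib import Path

# (stage directory, artifact suffix) in pipeline order, per the stages above.
PIPELINE = [
    ("oracle", "research"),
    ("plan-agent", "plan"),
    ("validate-agent", "validated"),
    ("implement-agent", "implementation"),
    ("review-agent", "review"),
]

def wait_for_artifact(path: Path, timeout: float = 600.0, poll: float = 5.0) -> bool:
    """Poll for the stage's output file instead of blocking on the agent."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if path.exists():
            return True
        time.sleep(poll)
    return False

def run_pipeline(task, launch_agent, cache_root=Path(".claude/cache/agents")):
    """Launch each stage in order; hand each agent the prior artifacts by path."""
    produced = []
    for stage, suffix in PIPELINE:
        out = cache_root / stage / f"{task}-{suffix}.md"
        launch_agent(stage, task, inputs=list(produced), output=out)
        if not wait_for_artifact(out):
            raise TimeoutError(f"{stage} never produced {out}")
        produced.append(out)
    return produced
```

The key property is that each stage's only interface to the next is a file path, so the main conversation stays small regardless of how verbose the agents are.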
Agent Progress Monitoring
# Watch for system reminders:
# "Agent a42a16e progress: 6 new tools used, 88914 new tokens"
# Poll for output files:
find .claude/cache/agents -name "*.md" -mmin -5
# Check task file size growth:
wc -c /tmp/claude/.../tasks/<id>.output
Stuck detection:
- Progress reminders stop arriving
- Task output file size stops growing
- Expected output file not created after reasonable time
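The second stuck-detection signal above (output file size stops growing) can be approximated by sampling the file size over a window. This is a sketch with illustrative thresholds, not the skill's actual monitoring mechanism:

```python
import time
from pathlib import Path

def is_stuck(path: Path, window: float = 120.0, poll: float = 10.0) -> bool:
    """True if the file is missing, or its size fails to grow for `window` seconds."""
    last_size = path.stat().st_size if path.exists() else -1
    deadline = time.monotonic() + window
    while time.monotonic() < deadline:
        time.sleep(poll)
        size = path.stat().st_size if path.exists() else -1
        if size > last_size:
            return False  # file grew during the window: agent still producing output
        last_size = size
    return True  # no growth for the whole window: treat as stuck
```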
Directory Structure
.claude/cache/agents/
├── oracle/
│ └── <task>-research.md
├── plan-agent/
│ └── <task>-plan.md
├── validate-agent/
│ └── <task>-validated.md
├── implement-agent/
│ └── <task>-implementation.md
└── review-agent/
└── <task>-review.md
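Assuming the layout above, the stage directories can be created up front so agents never fail on a missing parent directory (a minimal, idempotent sketch):

```python
from pathlib import Path

def ensure_agent_cache(root: Path = Path(".claude/cache/agents")) -> None:
    """Create one output directory per pipeline stage (safe to call repeatedly)."""
    for stage in ("oracle", "plan-agent", "validate-agent",
                  "implement-agent", "review-agent"):
        (root / stage).mkdir(parents=True, exist_ok=True)
```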
Key Rules
- Never use TaskOutput - floods context with 70k+ token transcripts
- Always run_in_background=true - isolates agent context
- File-based handoff - each agent reads previous agent's output file
- Poll, don't block - check file system for outputs, don't wait
- TDD in implementation - failing tests first, then make them pass
Source
- Session 2026-01-01: SDK Phase 3 implementation using this pattern
- https://github.com/parcadei/Continuous-Claude-v3/blob/main/.claude/skills/agentic-workflow/SKILL.md
Overview
Agentic Workflow defines a repeatable, file-based multi-agent pipeline for implementing tasks. It separates research, planning, validation, implementation, and review into distinct agents, whose outputs are stored in .claude/cache/agents for seamless orchestration.
How This Skill Works
Each task runs through five stages with dedicated subagents: oracle for research, plan-agent for planning, validate-agent for validation, agentica-agent for implementation, and review-agent for final review. All agents run in the background and write their outputs to stage-specific files under .claude/cache/agents, keeping the main conversation free for pure orchestration. A key rule is to never use TaskOutput, which would flood the main context with full agent transcripts; handoffs instead flow through per-stage markdown artifacts.
When to Use It
- To structure complex implementation tasks with clear research, planning, and validation stages
- When you need auditability and reproducibility through file-based handoffs
- When you want a TDD-driven implementation flow with tests first
- To decouple tasks using dedicated subagents (oracle, plan-agent, validate-agent, agentica-agent, review-agent)
- When you need non-blocking orchestration and progress monitoring via cached artifacts
Quick Start
- Step 1: Trigger the Research Agent (oracle) to verify approach and write to .claude/cache/agents/oracle/<task>-research.md
- Step 2: Run the Planning Agent to generate a plan from the research output and save to .claude/cache/agents/plan-agent/<task>-plan.md
- Step 3: Execute Validation, Implementation (with TDD), and Review agents in sequence; each writes to its respective cache path and validates through tests
Best Practices
- Enforce run_in_background=true for all agents
- Store each stage output in its own .claude/cache/agents/<stage>/ path
- Never use TaskOutput; rely on per-stage markdown artifacts for handoffs
- Follow TDD in the implementation stage by writing failing tests first
- Have the Review Agent cross-reference all artifacts and confirm tests pass
Example Use Cases
- Implementing a new API endpoint by completing research, planning, validation, implementation, and review across agents
- Refactoring a module with documented decisions and sanity checks via staged outputs
- Integrating a third-party service with a guided, auditable multi-agent workflow
- Extending a data pipeline with research-backed best practices and validated plans
- Auditing a codebase through a structured, artifact-driven agent chain