research-orchestration
npx machina-cli add skill a5c-ai/babysitter/research-orchestration --openclaw
Research Orchestration
Overview
Orchestrates 5-10 parallel research agents for comprehensive, multi-source research. Achieves up to 90% faster results compared to sequential research through concurrent execution.
Research Depths
| Depth | Agent Count | Use Case |
|---|---|---|
| Shallow | 5 | Quick fact-finding, simple queries |
| Medium | 7 | Standard research, moderate complexity |
| Deep | 10 | Comprehensive analysis, complex queries |
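The depth tiers above can be expressed as a simple lookup. This is an illustrative sketch only: the `ResearchDepth` type and `agentCountFor` helper are hypothetical names, not part of the skill's actual API.

```typescript
// Hypothetical mapping of research depth to parallel agent count,
// mirroring the table above (Shallow: 5, Medium: 7, Deep: 10).
type ResearchDepth = "shallow" | "medium" | "deep";

const AGENT_COUNTS: Record<ResearchDepth, number> = {
  shallow: 5, // quick fact-finding, simple queries
  medium: 7,  // standard research, moderate complexity
  deep: 10,   // comprehensive analysis, complex queries
};

function agentCountFor(depth: ResearchDepth): number {
  return AGENT_COUNTS[depth];
}
```

Keeping the tier-to-count mapping in one table-like constant makes it easy to tune agent counts without touching dispatch logic.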
Process Flow
- Plan: Decompose query into independent sub-queries
- Dispatch: Run 5-10 research agents in parallel
- Synthesize: Merge findings, identify consensus and conflicts
- Validate: Cross-reference against codebase for accuracy
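The four stages above can be pictured as an async pipeline. Everything here is a hedged sketch, not the skill's implementation: the `Finding` shape, the naive `;`-based query splitting, and the stubbed `researchAgent` are all assumptions for illustration.

```typescript
// Hypothetical sketch of the Plan → Dispatch → Synthesize → Validate flow.
interface Finding {
  subQuery: string;
  answer: string;
  confidence: number; // in [0, 1]
}

// Plan: decompose the query into independent sub-queries
// (naive semicolon split, for illustration only).
function plan(query: string): string[] {
  return query.split(";").map((q) => q.trim()).filter(Boolean);
}

// Dispatch: one research agent per sub-query. A real agent would consult
// code, docs, and configs; this stub returns a fixed-confidence answer.
async function researchAgent(subQuery: string): Promise<Finding> {
  return { subQuery, answer: `findings for ${subQuery}`, confidence: 0.8 };
}

// Synthesize: collect all findings; a Validate step would then
// cross-reference them against the codebase.
async function orchestrate(query: string): Promise<Finding[]> {
  const subQueries = plan(query);
  return Promise.all(subQueries.map(researchAgent)); // concurrent execution
}
```

`Promise.all` is what delivers the speed-up over sequential research: all agents run concurrently, so wall-clock time is bounded by the slowest sub-query rather than the sum of all of them.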
Sources
- Codebase: source files, patterns, implementations
- Documentation: README, JSDoc, inline comments
- Configuration: package.json, tsconfig, CI/CD configs
Confidence Scoring
Overall confidence is a weighted average of individual agent confidence scores, adjusted by validation results. Below 70% triggers human review.
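The scoring rule can be sketched as a weighted average with a review threshold. The weighting scheme and the `validationAdjustment` factor are assumptions for illustration; the skill does not document its exact formula.

```typescript
// Hypothetical confidence aggregation: weighted average of per-agent scores,
// scaled by a validation adjustment, with human review below 70%.
function overallConfidence(
  scores: number[],         // per-agent confidence in [0, 1]
  weights: number[],        // per-agent weights (e.g. by source quality)
  validationAdjustment = 1, // < 1 if codebase validation found conflicts
): number {
  const totalWeight = weights.reduce((acc, w) => acc + w, 0);
  const weighted = scores.reduce((acc, s, i) => acc + s * weights[i], 0);
  return (weighted / totalWeight) * validationAdjustment;
}

const REVIEW_THRESHOLD = 0.7;

function needsHumanReview(confidence: number): boolean {
  return confidence < REVIEW_THRESHOLD;
}
```

For example, two agents scoring 0.9 and 0.6 with weights 2 and 1 average to 0.8 and pass; if validation conflicts scale that by 0.8, the result drops to 0.64 and human review is triggered.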
When to Use
- Use the /research [query] slash command
- Before specification creation
- When investigating unfamiliar parts of the codebase
Processes Used By
claudekit-research (primary consumer)
Source
git clone https://github.com/a5c-ai/babysitter
View on GitHub: https://github.com/a5c-ai/babysitter/blob/main/plugins/babysitter/skills/babysit/process/methodologies/claudekit/skills/research-orchestration/SKILL.md
How This Skill Works
Plan the query by decomposing it into independent sub-queries, then Dispatch 5-10 agents in parallel. Synthesize findings to identify consensus and conflicts, then Validate results by cross-referencing against the codebase for accuracy. Overall confidence is a weighted average of agent scores, adjusted by validation, with below 70% triggering human review.
When to Use It
- Use the /research [query] slash command
- Before specification creation
- When investigating unfamiliar parts of the codebase
- When you need multi-source evidence from code, docs, and configurations
- When faster results are needed compared to sequential research
Quick Start
- Step 1: Plan - decompose the query into independent sub-queries
- Step 2: Dispatch & Synthesize - run 5-10 agents in parallel and synthesize findings
- Step 3: Validate & Review - cross-reference against the codebase and assess confidence; trigger human review if needed
Best Practices
- Plan and decompose the query into independent sub-queries before dispatch
- Choose depth (Shallow/Medium/Deep) and agent count (5/7/10) based on task complexity
- Always source from codebase, documentation, and configuration artifacts
- Synthesize results to identify consensus and conflicts across sources
- Validate findings with the codebase and trigger human review if overall confidence < 70%
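Identifying consensus and conflicts across agent findings can be sketched as grouping answers by sub-query and checking agreement. The `AgentFinding` shape and exact-match comparison are simplifying assumptions; a real synthesizer would compare answers semantically.

```typescript
// Hypothetical synthesis step: group agent findings per sub-query and flag
// sub-queries where agents disagree as conflicts needing closer validation.
interface AgentFinding {
  subQuery: string;
  answer: string;
}

function splitConsensusConflicts(findings: AgentFinding[]) {
  const byQuery = new Map<string, Set<string>>();
  for (const f of findings) {
    if (!byQuery.has(f.subQuery)) byQuery.set(f.subQuery, new Set());
    byQuery.get(f.subQuery)!.add(f.answer);
  }
  const consensus: string[] = [];
  const conflicts: string[] = [];
  // A single distinct answer means the agents agree; more than one is a conflict.
  byQuery.forEach((answers, subQuery) => {
    (answers.size === 1 ? consensus : conflicts).push(subQuery);
  });
  return { consensus, conflicts };
}
```

Routing only the conflicting sub-queries into deeper validation keeps the expensive cross-referencing focused where agents actually disagree.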
Example Use Cases
- Quick fact-finding on a simple function using Shallow depth with 5 agents
- Standard research on a module using Medium depth with 7 agents
- Deep, comprehensive analysis of a complex feature with 10 agents
- Cross-referencing code and README/docs to confirm usage patterns
- Investigating unfamiliar areas of the codebase during feature planning