
research-orchestration

npx machina-cli add skill a5c-ai/babysitter/research-orchestration --openclaw
Files (1)
SKILL.md
1.5 KB

Research Orchestration

Overview

Orchestrates 5-10 parallel research agents for comprehensive, multi-source research. Achieves up to 90% faster results compared to sequential research through concurrent execution.

Research Depths

Depth     Agent Count   Use Case
Shallow   5             Quick fact-finding, simple queries
Medium    7             Standard research, moderate complexity
Deep      10            Comprehensive analysis, complex queries
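The depth table above can be sketched as a simple lookup. This is an illustrative helper, not part of the skill's actual interface; the depth names and agent counts come from the table, while the function name is hypothetical.

```python
# Hypothetical mapping of research depth to parallel agent count,
# taken directly from the depth table above.
DEPTHS = {
    "shallow": 5,   # quick fact-finding, simple queries
    "medium": 7,    # standard research, moderate complexity
    "deep": 10,     # comprehensive analysis, complex queries
}

def agent_count(depth: str) -> int:
    """Return the number of parallel agents to dispatch for a depth."""
    try:
        return DEPTHS[depth.lower()]
    except KeyError:
        raise ValueError(f"unknown depth: {depth!r}") from None
```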

Process Flow

  1. Plan: Decompose query into independent sub-queries
  2. Dispatch: Run 5-10 research agents in parallel
  3. Synthesize: Merge findings, identify consensus and conflicts
  4. Validate: Cross-reference against codebase for accuracy
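The four-phase flow above can be sketched with a thread pool. This is a minimal illustration under assumed names: `plan` and `run_agent` are stand-in stubs (the real skill dispatches actual research agents), and the split into five sub-queries is arbitrary.

```python
from concurrent.futures import ThreadPoolExecutor

def plan(query: str) -> list[str]:
    # 1. Plan: decompose into independent sub-queries (stubbed as a split
    # into five aspects; a real planner would analyze the query).
    return [f"{query}: aspect {i}" for i in range(1, 6)]

def run_agent(sub_query: str) -> dict:
    # Stand-in for a real research agent; returns a finding with a
    # confidence score for later synthesis and validation.
    return {"sub_query": sub_query,
            "finding": f"result for {sub_query}",
            "confidence": 0.8}

def orchestrate(query: str) -> list[dict]:
    sub_queries = plan(query)                        # 1. Plan
    with ThreadPoolExecutor(max_workers=10) as ex:   # 2. Dispatch in parallel
        findings = list(ex.map(run_agent, sub_queries))
    return findings   # 3. Synthesize and 4. Validate happen downstream
```

`Executor.map` preserves input order, so findings line up with their sub-queries even though the agents run concurrently.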

Sources

  • Codebase: source files, patterns, implementations
  • Documentation: README, JSDoc, inline comments
  • Configuration: package.json, tsconfig, CI/CD configs

Confidence Scoring

Overall confidence is a weighted average of individual agent confidence scores, adjusted by validation results. Below 70% triggers human review.
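The scoring rule above can be written out directly. A sketch under assumptions: the source does not specify the weights or the exact validation adjustment, so `weights` and the signed `validation_adjustment` delta here are illustrative; only the weighted average and the 70% review threshold come from the text.

```python
def overall_confidence(scores, weights, validation_adjustment=0.0):
    """Weighted average of per-agent confidence scores, shifted by a
    hypothetical signed validation delta, clamped to [0, 1]."""
    if not scores or len(scores) != len(weights):
        raise ValueError("scores and weights must be non-empty and equal length")
    weighted = sum(s * w for s, w in zip(scores, weights)) / sum(weights)
    return max(0.0, min(1.0, weighted + validation_adjustment))

def needs_human_review(confidence: float, threshold: float = 0.70) -> bool:
    """Per the text, overall confidence below 70% triggers human review."""
    return confidence < threshold
```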

When to Use

  • /research [query] slash command
  • Before specification creation
  • When investigating unfamiliar parts of the codebase

Processes Used By

  • claudekit-research (primary consumer)

Source

git clone https://github.com/a5c-ai/babysitter
The skill file is at plugins/babysitter/skills/babysit/process/methodologies/claudekit/skills/research-orchestration/SKILL.md within the cloned repository.


How This Skill Works

Plan the query by decomposing it into independent sub-queries, then Dispatch 5-10 agents in parallel. Synthesize findings to identify consensus and conflicts, then Validate results by cross-referencing against the codebase for accuracy. Overall confidence is a weighted average of agent scores, adjusted by validation; a score below 70% triggers human review.

When to Use It

  • Use the /research [query] slash command
  • Before specification creation
  • When investigating unfamiliar parts of the codebase
  • When you need multi-source evidence from code, docs, and configurations
  • When faster results are needed compared to sequential research

Quick Start

  1. Plan: decompose the query into independent sub-queries
  2. Dispatch & Synthesize: run 5-10 agents in parallel and merge their findings
  3. Validate & Review: cross-reference against the codebase, assess confidence, and trigger human review if needed

Best Practices

  • Plan and decompose the query into independent sub-queries before dispatch
  • Choose depth (Shallow/Medium/Deep) and agent count (5/7/10) based on task complexity
  • Always source from codebase, documentation, and configuration artifacts
  • Synthesize results to identify consensus and conflicts across sources
  • Validate findings with the codebase and trigger human review if overall confidence < 70%
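The synthesis step named in the practices above, identifying consensus and conflicts across sources, can be sketched by grouping agent answers per topic. The `topic` and `answer` field names are illustrative assumptions; the skill does not document its finding schema.

```python
from collections import defaultdict

def synthesize(findings):
    """Group agent answers by topic: a topic where all agents agree is
    consensus; divergent answers are flagged as conflicts for review."""
    by_topic = defaultdict(set)
    for f in findings:
        by_topic[f["topic"]].add(f["answer"])
    consensus = {t: next(iter(a)) for t, a in by_topic.items() if len(a) == 1}
    conflicts = {t: sorted(a) for t, a in by_topic.items() if len(a) > 1}
    return consensus, conflicts
```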

Example Use Cases

  • Quick fact-finding on a simple function using Shallow depth with 5 agents
  • Standard research on a module using Medium depth with 7 agents
  • Deep, comprehensive analysis of a complex feature with 10 agents
  • Cross-referencing code and README/docs to confirm usage patterns
  • Investigating unfamiliar areas of the codebase during feature planning
