delegate
Install: npx machina-cli add skill bitflight-devops/hallucination-detector/delegate --openclaw
Delegation Template
Workflow Reference: See Multi-Agent Orchestration for complete delegation flow with DONE/BLOCKED signaling.
Step 1: Analyze the task. Do you have the "WHERE, WHAT, WHY"?
Step 2: Construct the prompt using the template below.
Template
Your ROLE_TYPE is sub-agent.
[Task Identification - one sentence]
OBSERVATIONS (Factual only):
- [Verbatim error messages]
- [Exact file:line references]
- [Environment state]
- [NO interpretations or "I think"]
DEFINITION OF SUCCESS (The "WHAT"):
- [Specific measurable outcome]
- [Acceptance criteria]
- [Verification method]
CONTEXT (The "WHERE" & "WHY"):
- Location: [Where to look]
- Scope: [Boundaries]
- Constraints: [Hard requirements vs Preferences]
AVAILABLE RESOURCES:
- [List available MCP tools]
- [Reference docs with @filepath]
YOUR TASK:
1. Run /verify (as completion criteria guide)
2. Perform comprehensive context gathering
3. Form hypothesis → Experiment → Verify
4. Implement solution
5. Only report completion after /verify criteria are met
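The template above can also be assembled programmatically before handing the prompt to a sub-agent. The following is a minimal sketch; the `Delegation` dataclass, its field names, and the example values are illustrative, not part of the skill itself:

```python
# Minimal sketch: assemble a delegation prompt from the template fields.
# The dataclass and its field names are illustrative, not part of the skill.
from dataclasses import dataclass


@dataclass
class Delegation:
    task: str
    observations: list[str]
    success: list[str]
    location: str
    scope: str
    constraints: str
    resources: list[str]

    def render(self) -> str:
        def bullets(items: list[str]) -> str:
            return "\n".join(f"- {item}" for item in items)

        return "\n".join([
            "Your ROLE_TYPE is sub-agent.",
            self.task,
            "OBSERVATIONS (Factual only):",
            bullets(self.observations),
            'DEFINITION OF SUCCESS (The "WHAT"):',
            bullets(self.success),
            'CONTEXT (The "WHERE" & "WHY"):',
            f"- Location: {self.location}",
            f"- Scope: {self.scope}",
            f"- Constraints: {self.constraints}",
            "AVAILABLE RESOURCES:",
            bullets(self.resources),
            "YOUR TASK:",
            "1. Run /verify (as completion criteria guide)",
            "2. Perform comprehensive context gathering",
            "3. Form hypothesis -> Experiment -> Verify",
            "4. Implement solution",
            "5. Only report completion after /verify criteria are met",
        ])


prompt = Delegation(
    task="Fix the failing login test in the auth service.",
    observations=["pytest reports: AssertionError at tests/test_auth.py:42"],
    success=["All tests in tests/test_auth.py pass"],
    location="services/auth/",
    scope="Auth service only; do not touch other services",
    constraints="Must use the existing 'requests' library",
    resources=["@docs/auth.md"],
).render()
print(prompt.splitlines()[0])  # → Your ROLE_TYPE is sub-agent.
```

Note that the observations are verbatim error output and the constraints name a hard requirement, while nothing in the prompt tells the sub-agent how to implement the fix.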
Delegation Rules
Check before sending:
| Rule | Check |
|---|---|
| Formula | Delegation = Observations + Success Criteria + Resources - Assumptions - Micromanagement |
| No HOW | Do NOT tell agent how to implement (e.g., "Change line 42 to X") |
| Constraints OK | DO tell agent constraints (e.g., "Must use the 'requests' library") |
| No Assumptions | Do NOT say "The issue is probably..." |
| Full Scope | If code smell found, instruct agent to audit entire pattern, not single instance |
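Some of the rules in the table can be spot-checked mechanically before sending. A minimal sketch, assuming a plain-string prompt; the red-flag patterns below are illustrative examples, not an exhaustive rule set:

```python
import re

# Illustrative red-flag patterns drawn from the rules above; not exhaustive.
RED_FLAGS = {
    "No HOW": re.compile(r"\bchange line \d+\b", re.IGNORECASE),
    "No Assumptions": re.compile(r"\b(probably|I think)\b", re.IGNORECASE),
}


def check_delegation(prompt: str) -> list[str]:
    """Return a list of delegation-rule violations found in a prompt."""
    violations = []
    if not prompt.startswith("Your ROLE_TYPE is sub-agent."):
        violations.append("Missing role line")
    for rule, pattern in RED_FLAGS.items():
        if pattern.search(prompt):
            violations.append(rule)
    return violations


print(check_delegation("The issue is probably a race condition."))
# → ['Missing role line', 'No Assumptions']
```

A check like this cannot verify the positive requirements (measurable success criteria, full scope), so it complements rather than replaces the checklist below.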
Quick Checklist
- Starts with "Your ROLE_TYPE is sub-agent."
- Contains only factual observations
- No assumptions stated as facts
- Defines WHAT and WHY, not HOW
- Lists resources without prescribing tools
Source
https://github.com/bitflight-devops/hallucination-detector/blob/main/.claude/skills/delegate/SKILL.md
Overview
A quick delegation template for assigning work to sub-agents using the WHERE-WHAT-WHY framework. It helps you craft precise prompts before invoking the Task tool and supports preparing prompts for specialized agents. For deeper delegation guidance, this skill links to the agent-orchestration how-to-delegate guide.
How This Skill Works
You start by analyzing the task to confirm the WHERE, WHAT, and WHY. Then you construct a prompt using the provided template with sections for observations, success criteria, context, and resources. The resulting prompt directs the sub-agent to run verification steps and report completion only after the verify criteria are met.
When to Use It
- When you need to assign work to a sub-agent.
- Before invoking the Task tool to ensure clear prompts.
- When preparing prompts for specialized agents.
- When you must collect factual observations and define measurable success.
- When orchestrating multi-agent workflows requiring WHERE-WHAT-WHY.
Quick Start
- Step 1: Start the prompt with "Your ROLE_TYPE is sub-agent."
- Step 2: Analyze the task for WHERE, WHAT, and WHY.
- Step 3: Build the prompt with Observations, Definition of Success, Context, and Resources, then plan YOUR TASK steps and run /verify.
Best Practices
- Start prompts with "Your ROLE_TYPE is sub-agent."
- Keep Observations factual and verbatim.
- Define WHAT and WHY; do not prescribe HOW.
- List AVAILABLE RESOURCES without prescribing tools.
- Ensure measurable, verifiable acceptance criteria and clear success.
Example Use Cases
- Delegate a bug fix task to a sub-agent, collect exact error messages and file references, and verify completion via /verify.
- Delegate data extraction from logs using the template, including exact messages and environment state.
- Prepare a prompt for a specialized NLP agent to summarize a document with defined scope and constraints.
- Audit a code pattern by instructing the agent to review the entire pattern when a code smell is found.
- Coordinate a multi-agent task where one sub-agent gathers context and another implements the solution, validating results after /verify.