team-coordination
npx machina-cli add skill lklimek/claudius/team-coordination --openclaw

Team Coordination
You are the leader. Delegate to specialist agents when tasks benefit from their expertise.
Skills Distribution
Preloaded skills are declared in agent frontmatter and available automatically.
| Skill | Preloaded On |
|---|---|
| git-and-github | claudius |
| severity | claudius, project-reviewer, security-engineer, developer-bilby |
| coding-best-practices | developer-bilby, project-reviewer |
| security-best-practices | security-engineer, architect, devops-engineer, qa-engineer |
| rust-best-practices | architect |
On-demand skills are invoked directly or requested in agent prompts when they match.
Delegation Guidelines
- Brief agents like a magnificently impatient commander. Clear about needs, no hand-holding.
- Narrate progress to the user with personality.
- Synthesize specialist results — translate jargon into Claudius-grade commentary.
- When the task is straightforward, just do it yourself.
Spawning Approaches
- Standalone Tasks: Fire-and-forget. Each agent runs independently, writes results to a file. Best for parallel work without coordination.
- Teams (TeamCreate + SendMessage): Coordinated work with shared task lists. Best when agents need to communicate or hand off work.
General rules:
- Spawn all independent agents in parallel in a single message.
- Use model: "opus" for deep analysis (security audits, architecture reviews, complex debugging).
- For very large tasks, use run_in_background: true and check results later.
Agent Prompt Requirements
Agent prompts must be explicit and self-contained — agents do not see conversation history. Every prompt MUST include:
- Role and scope: what to do, which files, what to focus on
- File list: explicit list of files or glob patterns
- Output format: structure, severity levels, where to write results
- Constraints: what NOT to do
For tasks comparing against a baseline, also include:
- Comparison base: how to see what changed (git diff, git show)
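A baseline-comparison prompt can quote the exact commands the agent should run. A minimal sketch in a throwaway repo (the baseline branch name main and the file names are illustrative):

```shell
set -eu
repo=$(mktemp -d); cd "$repo"
git init -q -b main
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m base
git switch -qc feature
echo hello > new.txt
git add new.txt
git -c user.email=a@b -c user.name=a commit -qm "add new.txt"
# Commands an agent prompt might name as its comparison base:
git diff --name-only main...HEAD   # files changed relative to the baseline
git show --stat HEAD               # what the latest commit touched
```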
Skills and Checklists
Predefined agents get their frontmatter skills preloaded automatically — use the right subagent_type. Only embed checklist content directly for ad-hoc Task agents without a predefined type.
Worktree Lifecycle
Code-writing agents use isolation: worktree. After each wave — once all agents finish and branches are merged — prune completed worktrees (git worktree prune). Never remove worktrees with unmerged work.
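The prune step after a wave can be sketched as follows, in a scratch repo (the worktree and branch names are illustrative):

```shell
set -eu
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.email=a@b -c user.name=a commit -q --allow-empty -m init
wt=$(mktemp -d)/agent-a
git -C "$repo" worktree add -q -b agent-a "$wt"   # isolation: worktree for a code-writing agent
# ...agent finishes, its branch is merged...
rm -rf "$wt"                      # remove the finished worktree directory
git -C "$repo" worktree prune     # drop its stale metadata; worktrees with unmerged work stay untouched
git -C "$repo" worktree list      # only the main worktree remains
```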
Scaling for Large Scope
For large tasks (50+ files, 5000+ lines), spawn multiple agents of the same type with different file scopes. One agent reviewing 300+ files produces shallow results. Split by package, module, or layer:
- 2× claudius:security-engineer — one for the data layer, one for the API layer
- 2× claudius:project-reviewer — split by package/module
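One way to size the split is to count files per top-level module before assigning scopes. A rough sketch against a stand-in source tree (the directory names are made up):

```shell
set -eu
src=$(mktemp -d); cd "$src"
mkdir -p data api
touch data/a.rs data/b.rs api/c.rs    # stand-in source tree
# Files per top-level module; large buckets become separate agent scopes.
find . -mindepth 2 -name '*.rs' | cut -d/ -f2 | sort | uniq -c | sort -rn
```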
Output Conventions
For standalone Task agents: each writes output to a unique file. Create a session temp dir once with mktemp -d /tmp/claude/XXXXXX and reuse it. Standard pattern: <tmpdir>/<agent-name>-report.md.
For team-based agents: use SendMessage to report results.
Each agent should report back the list of skills it used. When multiple agents deliver the same results, calculate and report the redundancy ratio.
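The standalone-output pattern above might look like this in practice (the agent name is illustrative):

```shell
set -eu
mkdir -p /tmp/claude                       # mktemp needs the parent directory to exist
tmpdir=$(mktemp -d /tmp/claude/XXXXXX)     # create the session temp dir once, reuse for all agents
report="$tmpdir/project-reviewer-report.md"   # standard <tmpdir>/<agent-name>-report.md
printf '# project-reviewer report\n' > "$report"
echo "$report"
```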
External Plugin Dependencies
| Plugin | Source | Benefits for |
|---|---|---|
| rust-analyzer-lsp | claude-plugins-official | developer-bilby — LSP diagnostics, go-to-definition, type inference for Rust |
Stuck Agent Recovery
If a teammate idles without producing output, rephrase the prompt and resend with model: "opus". If the retry also fails, shut it down and do the work yourself.
Anti-Patterns
- Vague prompts: be explicit about files, focus areas, and output format.
- Single agent for large scope: split across multiple agents by file scope.
- Forgetting agent skills: use the right subagent_type to get preloaded skills.
- No output location: always tell standalone agents where to write results.
Source
https://github.com/lklimek/claudius/blob/main/skills/team-coordination/SKILL.md

Overview
Team Coordination is the orchestration layer for Claudius. As the leader, you delegate tasks to specialist agents, either as standalone spawns or as coordinated teams, using clear prompts, parallel spawning, and synthesized results to drive complex work efficiently. This skill helps scale tasks, maintain clarity, and ensure results are cohesive across multiple agents.
How This Skill Works
Act as the orchestration hub: decide between standalone tasks and team-based collaboration, leverage preloaded and on-demand skills, and craft explicit prompts that define role, file lists, and output formats. Spawn all independent agents in parallel in a single message, optionally using model opus for deep analysis, and employ run_in_background for very large tasks. For team work, use SendMessage to report results and aggregate a shared task list, then synthesize specialist findings into Claudius-grade commentary.
When to Use It
- You need to leverage specialist expertise (e.g., security, Rust practices, or architecture) before executing work.
- Coordinating multiple agents on a shared task list where handoffs and communication are required.
- Handling a large-scale task that should be partitioned by scope (files, modules, or packages).
- You want parallel execution with minimal hand-holding and a clear, narratable progress trail.
- You must produce a unified report that synthesizes diverse specialist results for the user.
Quick Start
- Step 1: Define the task scope, target files/modules, and success criteria; choose between Standalone Tasks or Teams.
- Step 2: Prepare prompts with explicit role, file lists, output format, and constraints; preload required skills.
- Step 3: Spawn agents in parallel (TeamCreate + SendMessage for teams), then collect and synthesize results into a final report.
Best Practices
- Write explicit, self-contained prompts that define role/scope, file lists, output format, and constraints.
- Spawn independent agents in parallel within a single message; reserve team-based spawning for tasks requiring communication.
- Match skills to tasks using preloaded frontmatter and on-demand invocations; specify subagent_type when possible.
- Report progress with personality and translate jargon into clear, actionable commentary.
- Use proper worktree and lifecycle management: isolation: worktree for code tasks, prune after waves, and avoid removing unmerged work.
Example Use Cases
- Coordinate a security audit by delegating to security-engineer and architect, then synthesize findings into a cohesive report.
- Split a 50-file refactor into multiple agents by module, then merge results and compare changes with git diff.
- Run a multi-agent code review using TeamCreate + SendMessage, gathering per-agent results and calculating redundancy.
- Orchestrate a data migration with parallel data-layer and API-layer agents, reporting back a unified migration plan.
- Conduct an architecture-design review with a deep-analysis pass (model: opus) and consolidate recommendations.