code-review-orchestration
npx machina-cli add skill a5c-ai/babysitter/code-review-orchestration --openclaw

Code Review Orchestration
Overview
Orchestrates 6 specialized review agents running in parallel across independent dimensions. Each agent scores independently, and results are aggregated into a weighted final score with a clear recommendation.
Six Dimensions
Architecture (weight: 20%)
Module boundaries, dependency direction, design pattern adherence, architectural drift.
Security (weight: 25%)
Injection vulnerabilities, auth, secrets, crypto, dependencies, headers.
Performance (weight: 15%)
Algorithmic complexity, resource leaks, database patterns, caching, async.
Testing (weight: 15%)
Coverage, quality, edge cases, isolation, flakiness, integration, error paths.
Quality (weight: 15%)
Naming, readability, error handling, type safety, DRY, comments.
Documentation (weight: 10%)
JSDoc, README updates, changelog, inline comments, type documentation.
Scoring and Recommendation
- APPROVE: overall >= 80 AND zero critical issues
- REQUEST_CHANGES: 60 <= overall < 80, OR any critical issues
- REJECT: overall < 60
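The weighting and gating above can be sketched in code. This is a minimal illustration, not the skill's actual implementation; the function and type names are assumptions.

```typescript
// Sketch: aggregate six per-dimension scores (0-100) into a weighted overall
// score and map it to a recommendation. Weights mirror the documented
// dimension weights; everything else here is illustrative.

type Dimension =
  | "architecture" | "security" | "performance"
  | "testing" | "quality" | "documentation";

const WEIGHTS: Record<Dimension, number> = {
  architecture: 0.20,
  security: 0.25,
  performance: 0.15,
  testing: 0.15,
  quality: 0.15,
  documentation: 0.10,
};

type Recommendation = "APPROVE" | "REQUEST_CHANGES" | "REJECT";

function aggregate(
  scores: Record<Dimension, number>,
  criticalIssues: number,
): { overall: number; recommendation: Recommendation } {
  // Weighted sum across all six dimensions.
  const overall = (Object.keys(WEIGHTS) as Dimension[])
    .reduce((sum, d) => sum + WEIGHTS[d] * scores[d], 0);

  let recommendation: Recommendation;
  if (overall < 60) {
    recommendation = "REJECT";
  } else if (overall >= 80 && criticalIssues === 0) {
    recommendation = "APPROVE";
  } else {
    // 60 <= overall < 80, or critical issues despite a high score.
    recommendation = "REQUEST_CHANGES";
  }
  return { overall, recommendation };
}
```

Note that a single critical issue downgrades an otherwise-passing score from APPROVE to REQUEST_CHANGES, which is the gating described above.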
When to Use
- /code-review slash command
- Post-implementation review in spec execution
- Pre-merge quality gate
Agents Used
code-review-coordinator, security-analyst, performance-analyst, testing-specialist, architecture-reviewer
Processes Used By
- claudekit-code-review (primary consumer)
- claudekit-orchestrator (via command dispatch)
Source
https://github.com/a5c-ai/babysitter/blob/main/plugins/babysitter/skills/babysit/process/methodologies/claudekit/skills/code-review-orchestration/SKILL.md

Overview
Code Review Orchestration runs six specialized review agents in parallel across architecture, security, performance, testing, quality, and documentation. Each agent scores its dimension independently, and results are aggregated into a weighted final score with a clear recommendation.
How This Skill Works
Six agents—code-review-coordinator, security-analyst, performance-analyst, testing-specialist, architecture-reviewer, and a quality/documentation-oriented reviewer—evaluate their respective dimensions in parallel. The claudekit-code-review workflow coordinates execution while claudekit-orchestrator aggregates the scores into a weighted final score and a formal recommendation (APPROVE, REQUEST_CHANGES, or REJECT).
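The parallel fan-out described above can be sketched as follows. The agent names come from this document, but `runAgent`, its signature, and the agent-to-dimension mapping for quality and documentation are assumptions for illustration only.

```typescript
// Illustrative sketch: dispatch six reviewer agents concurrently and collect
// one score per dimension. A real orchestrator would prompt each named agent
// against the changeset and parse its report.

interface DimensionResult {
  dimension: string;
  score: number;          // 0-100
  criticalIssues: number;
}

// Stand-in for invoking a review agent (hypothetical API).
async function runAgent(name: string, dimension: string): Promise<DimensionResult> {
  return { dimension, score: 0, criticalIssues: 0 };
}

async function reviewInParallel(): Promise<DimensionResult[]> {
  const plan: Array<[agent: string, dimension: string]> = [
    ["architecture-reviewer", "architecture"],
    ["security-analyst", "security"],
    ["performance-analyst", "performance"],
    ["testing-specialist", "testing"],
    // Quality/documentation assignment below is assumed, not documented.
    ["code-review-coordinator", "quality"],
    ["code-review-coordinator", "documentation"],
  ];
  // Promise.all starts every agent before awaiting any of them,
  // so total review time approaches the slowest single dimension.
  return Promise.all(plan.map(([agent, dim]) => runAgent(agent, dim)));
}
```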
When to Use It
- Trigger the orchestration with the /code-review command
- Perform a post-implementation review in spec execution
- Use as a pre-merge quality gate
- Run for release readiness checks requiring multi-dimension scoring
- Apply to architecture/security-sensitive projects needing cross-dimension evaluation
Quick Start
- Step 1: Trigger the orchestration via the claudekit-enabled /code-review command
- Step 2: Six agents evaluate Architecture, Security, Performance, Testing, Quality, and Documentation in parallel
- Step 3: Review the weighted final score and recommendation (APPROVE, REQUEST_CHANGES, or REJECT) and act accordingly
Best Practices
- Define and lock the six dimension weights and the critical-issues gating before running
- Give each agent a clear scope, acceptance criteria, and definition of done (DoD)
- Run all agents in parallel to minimize review time and rely on the aggregated results
- Ensure inputs are consistent (code changes, tests, and documentation) across dimensions
- Document decisions and update changelog/README aligned with the Documentation dimension
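Locking the weights and gating before a run, as recommended above, might take the form of a small config file. The file name, schema, and key names below are illustrative assumptions; the skill's actual configuration format is not shown in this document.

```json
{
  "weights": {
    "architecture": 0.20,
    "security": 0.25,
    "performance": 0.15,
    "testing": 0.15,
    "quality": 0.15,
    "documentation": 0.10
  },
  "gates": {
    "approveMinOverall": 80,
    "rejectBelowOverall": 60,
    "blockOnCriticalIssues": true
  }
}
```

The weights sum to 1.0, matching the six documented dimension weights, and the gate thresholds mirror the APPROVE/REQUEST_CHANGES/REJECT bands.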
Example Use Cases
- Reviewing a new microservice with architecture changes and cross-service dependencies
- Patching a security vulnerability across dependencies with a security-analyst review
- Optimizing a data-intensive operation for performance and caching strategy
- Expanding test coverage and improving test isolation across modules
- Updating API documentation and inline code comments to reflect changes