code-review-gate
Install: npx machina-cli add skill a5c-ai/babysitter/code-review-gate --openclaw
Capabilities
Performs architect-level code review enforcing four core principles: DRY (no unnecessary duplication), YAGNI (no speculative features), proper abstraction (correct encapsulation), and test coverage (adequate automated tests). Provides numeric quality scores and specific file:line feedback.
Tool Use Instructions
- Use Read to examine code changes and test files
- Use Grep to search for duplication patterns and anti-patterns
- Use Glob to verify test file coverage
- Use Bash to run lint, test, and coverage commands
- Use Write to generate review reports
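The Bash step above can be sketched as a small runner that executes each check and records its exit code. The specific commands are hypothetical placeholders; substitute your project's actual lint, test, and coverage tooling.

```python
import subprocess

# Hypothetical check commands -- replace with your project's real tooling.
CHECKS = [
    ["npx", "eslint", "."],                 # lint
    ["npx", "jest", "--ci"],                # tests
    ["npx", "jest", "--coverage", "--ci"],  # coverage
]

def run_checks(checks):
    """Run each check command and collect (command, exit code) pairs."""
    results = []
    for cmd in checks:
        proc = subprocess.run(cmd, capture_output=True, text=True)
        results.append((" ".join(cmd), proc.returncode))
    return results
```

A nonzero exit code from any check would feed into the review report as a blocking finding.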
Process Integration
- Used in maestro-orchestrator.js Phase 4 (Architect Code Review)
- Used in maestro-development.js (PR Review Cycle)
- Used in maestro-hotfix.js (Expedited Review)
- Maps to tasks: maestro-architect-code-review, maestro-dev-architect-review, maestro-hotfix-review
- Agents: Architect, Code Reviewer
- Quality convergence loop: rejected code returns to coder for fixes
- Checks are "turned up to 11" by default
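The convergence loop described above can be sketched as follows. `review` and `apply_fixes` are hypothetical stand-ins for the reviewer and coder agents; the threshold value mirrors the "turned up to 11" default.

```python
THRESHOLD = 11  # checks are "turned up to 11" by default

def converge(code, review, apply_fixes, threshold=THRESHOLD, max_rounds=5):
    """Iterate review -> fix until the quality score meets the threshold.

    review(code) -> (score, feedback); apply_fixes(code, feedback) -> code.
    Both callables are hypothetical stand-ins for the real agents.
    """
    for _ in range(max_rounds):
        score, feedback = review(code)
        if score >= threshold:
            return code, score
        # Rejected code returns to the coder for fixes.
        code = apply_fixes(code, feedback)
    return code, score
```

Capping the rounds avoids an unbounded loop when the code never reaches the threshold.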
Source
https://github.com/a5c-ai/babysitter/blob/main/plugins/babysitter/skills/babysit/process/methodologies/maestro/skills/code-review-gate/SKILL.md
Overview
Code Review Gate enforces four core software design principles during architect-level reviews: DRY to avoid duplication, YAGNI to prevent speculative features, proper abstraction for clean encapsulation, and adequate test coverage. It provides numeric quality scores and precise file:line feedback to guide fixes and raise code quality across the codebase.
How This Skill Works
The skill analyzes code changes using Read, searches for duplication patterns and anti-patterns with Grep, and verifies test coverage via Glob. Bash runs lint, tests, and coverage checks, while Write generates a structured review report that includes a numeric quality score and file:line feedback. Checks are tuned to a high standard (default 11) and integrated into maestro's architecture-review workflow.
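The structured review report mentioned above might look like the sketch below. This shape (a numeric score plus file:line findings tagged by principle) is an assumption for illustration; the actual SKILL.md format may differ.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """One piece of file:line feedback, tagged by the principle it violates."""
    file: str
    line: int
    principle: str  # e.g. "DRY", "YAGNI", "abstraction", "coverage"
    message: str

@dataclass
class ReviewReport:
    score: int  # numeric quality score
    findings: list = field(default_factory=list)

    def format(self):
        """Render the report as score header plus file:line feedback lines."""
        lines = [f"Quality score: {self.score}"]
        lines += [f"{f.file}:{f.line} [{f.principle}] {f.message}"
                  for f in self.findings]
        return "\n".join(lines)
```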
When to Use It
- During maestro Phase 4 (Architect Code Review) of a PR
- When a change spans multiple modules and risks duplication or leakage of concerns
- When proposed features are speculative and may violate YAGNI
- When test coverage for modified areas is insufficient or missing
- During expedited hotfix reviews that still require thorough architectural scrutiny
Quick Start
- Step 1: Use Read to inspect code changes and related tests
- Step 2: Run Bash to lint, run tests, and measure coverage; use Grep and Glob to assist
- Step 3: Use Write to produce the review report with scores and precise file:line feedback
Best Practices
- Start with a DRY pass to identify cross-module duplication
- Assess abstractions for correct encapsulation and minimal leakage
- Require or increase automated tests to match the changes
- Provide precise file:line feedback with concrete, actionable fixes
- Iterate the review until quantitative quality thresholds are met
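As one way to start the DRY pass, identical non-trivial lines appearing in more than one file can be flagged mechanically. This is a naive textual sketch; real duplication detection (AST-level or token-based) is more robust.

```python
from collections import defaultdict

def find_duplicates(sources, min_len=20):
    """Flag identical non-trivial lines that appear in more than one file.

    sources: {filename: text}. Returns {line_text: [(file, lineno), ...]}
    for lines of at least min_len characters seen in multiple files.
    """
    seen = defaultdict(list)
    for name, text in sources.items():
        for i, line in enumerate(text.splitlines(), start=1):
            stripped = line.strip()
            if len(stripped) >= min_len:
                seen[stripped].append((name, i))
    # Keep only lines that occur in more than one distinct file.
    return {k: v for k, v in seen.items() if len({f for f, _ in v}) > 1}
```

Hits from a pass like this become candidates for extraction into a shared abstraction, with each occurrence reported as file:line feedback.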
Example Use Cases
- Refactoring redundant utility functions into a single shared abstraction to eliminate duplication across services
- Introducing a well-scoped abstraction layer to replace ad-hoc data access patterns
- Removing a speculative feature that adds risk without current business value
- Adding missing tests to cover newly changed paths and edge cases
- Delivering targeted file:line feedback that directs a coder to the root cause at the exact location