
unified-review

npx machina-cli add skill athola/claude-night-market/unified-review --openclaw

Unified Review Orchestration

Intelligently selects and executes appropriate review skills based on codebase analysis and context.

Quick Start

# Auto-detect and run appropriate reviews
/full-review

# Focus on specific areas
/full-review api          # API surface review
/full-review architecture # Architecture review
/full-review bugs         # Bug hunting
/full-review tests        # Test suite review
/full-review all          # Run all applicable skills

Verification: Run pytest -v to verify tests pass.

When To Use

  • Starting a full code review
  • Reviewing changes across multiple domains
  • Need intelligent selection of review skills
  • Want integrated reporting from multiple review types
  • Before merging major feature branches

When NOT To Use

  • Specific review type known - use the specialized skill directly
  • Bug-only focus - use bug-review
  • Test-only focus - use test-review
  • Architecture-only focus - use architecture-review

Review Skill Selection Matrix

Codebase Pattern                     | Review Skills                        | Triggers
Rust files (*.rs, Cargo.toml)        | rust-review, bug-review, api-review  | Rust project detected
API changes (openapi.yaml, routes/)  | api-review, architecture-review      | Public API surfaces
Test files (test_*.py, *_test.go)    | test-review, bug-review              | Test infrastructure
Makefile/build system                | makefile-review, architecture-review | Build complexity
Mathematical algorithms              | math-review, bug-review              | Numerical computation
Architecture docs/ADRs               | architecture-review, api-review      | System design
General code quality                 | bug-review, test-review              | Default review

Workflow

1. Analyze Repository Context

  • Detect primary languages from extensions and manifests
  • Analyze git status and diffs for change scope
  • Identify project structure (monorepo, microservices, library)
  • Detect build systems, testing frameworks, documentation
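The language-detection step above can be sketched as a simple extension-and-manifest mapping. The function and table names here are illustrative, not part of the skill:

```python
from collections import Counter
from pathlib import PurePath

# Illustrative mappings; a real analyzer would cover many more languages.
EXT_LANGS = {".rs": "rust", ".py": "python", ".go": "go", ".ts": "typescript"}
MANIFESTS = {"Cargo.toml": "rust", "pyproject.toml": "python", "go.mod": "go"}

def detect_languages(paths):
    """Return languages ordered by how often their files appear."""
    counts = Counter()
    for p in paths:
        name = PurePath(p).name
        if name in MANIFESTS:
            counts[MANIFESTS[name]] += 1
        else:
            lang = EXT_LANGS.get(PurePath(p).suffix)
            if lang:
                counts[lang] += 1
    return [lang for lang, _ in counts.most_common()]

print(detect_languages(["src/lib.rs", "Cargo.toml", "tests/test_api.py"]))
# → ['rust', 'python']
```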

2. Select Review Skills

# Detection logic
if has_rust_files():
    schedule_skill("rust-review")
if has_api_changes():
    schedule_skill("api-review")
if has_test_files():
    schedule_skill("test-review")
if has_makefiles():
    schedule_skill("makefile-review")
if has_math_code():
    schedule_skill("math-review")
if has_architecture_changes():
    schedule_skill("architecture-review")
# Default
schedule_skill("bug-review")
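The predicates in the detection logic above are left undefined by the skill. A minimal interpretation, matching a few rows of the selection matrix (the helper names and glob patterns are assumptions):

```python
import fnmatch

def _any_match(paths, patterns):
    """True if any file basename matches any glob pattern."""
    return any(fnmatch.fnmatch(p.rsplit("/", 1)[-1], pat)
               for p in paths for pat in patterns)

def has_rust_files(paths):
    return _any_match(paths, ["*.rs", "Cargo.toml"])

def has_test_files(paths):
    return _any_match(paths, ["test_*.py", "*_test.go"])

def has_makefiles(paths):
    return _any_match(paths, ["Makefile", "*.mk"])

def select_skills(paths):
    """Schedule skills per the detection logic; bug-review is always the default."""
    skills = []
    if has_rust_files(paths):
        skills.append("rust-review")
    if has_test_files(paths):
        skills.append("test-review")
    if has_makefiles(paths):
        skills.append("makefile-review")
    skills.append("bug-review")
    return skills

print(select_skills(["src/main.rs", "Makefile"]))
# → ['rust-review', 'makefile-review', 'bug-review']
```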


3. Execute Reviews

  • Run selected skills concurrently
  • Share context between reviews
  • Maintain consistent evidence logging
  • Track progress via TodoWrite
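Concurrent execution of the selected skills could look like the sketch below, using a thread pool. The run_review stub stands in for whatever actually invokes a skill; its shape is an assumption:

```python
from concurrent.futures import ThreadPoolExecutor

def run_review(skill, context):
    # Placeholder: a real implementation would invoke the skill here
    # and return its findings.
    return {"skill": skill, "findings": [f"{skill} checked {context['repo']}"]}

def execute_reviews(skills, context):
    """Run selected review skills concurrently, preserving submission order."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(run_review, s, context) for s in skills]
        return [f.result() for f in futures]

results = execute_reviews(["bug-review", "test-review"], {"repo": "demo"})
print([r["skill"] for r in results])
# → ['bug-review', 'test-review']
```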

4. Integrate Findings

  • Consolidate findings across domains
  • Identify cross-domain patterns
  • Prioritize by impact and effort
  • Generate unified action plan
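"Prioritize by impact and effort" can be made concrete with a simple sort: highest impact first, and cheapest effort first among ties. The finding fields here are illustrative:

```python
def prioritize(findings):
    """Sort findings by impact (descending), then effort (ascending)."""
    return sorted(findings, key=lambda f: (-f["impact"], f["effort"]))

findings = [
    {"id": "API-1", "impact": 2, "effort": 1},
    {"id": "BUG-7", "impact": 3, "effort": 2},
    {"id": "TST-4", "impact": 3, "effort": 1},
]
print([f["id"] for f in prioritize(findings)])
# → ['TST-4', 'BUG-7', 'API-1']
```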

Review Modes

Auto-Detect (default)

Automatically selects skills based on codebase analysis.

Focused Mode

Run specific review domains:

  • /full-review api → api-review only
  • /full-review architecture → architecture-review only
  • /full-review bugs → bug-review only
  • /full-review tests → test-review only

Full Review Mode

Run all applicable review skills:

  • /full-review all → Execute all detected skills
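The three modes above reduce to a small dispatch table. A sketch, with illustrative names (no such function is defined by the skill):

```python
MODE_SKILLS = {
    "api": ["api-review"],
    "architecture": ["architecture-review"],
    "bugs": ["bug-review"],
    "tests": ["test-review"],
}

def resolve_mode(arg, detected):
    """Map a /full-review argument to the list of skills to run."""
    if arg is None or arg == "all":
        return detected           # auto-detect / full review modes
    return MODE_SKILLS[arg]       # focused mode

print(resolve_mode("api", ["rust-review", "bug-review"]))
# → ['api-review']
```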

Quality Gates

Each review must:

  1. Establish proper context
  2. Execute all selected skills successfully
  3. Document findings with evidence
  4. Prioritize recommendations by impact
  5. Create action plan with owners

Deliverables

Executive Summary

  • Overall codebase health assessment
  • Critical issues requiring immediate attention
  • Review frequency recommendations

Domain-Specific Reports

  • API surface analysis and consistency
  • Architecture alignment with ADRs
  • Test coverage gaps and improvements
  • Bug analysis and security findings
  • Performance and maintainability recommendations

Integrated Action Plan

  • Prioritized remediation tasks
  • Cross-domain dependencies
  • Assigned owners and target dates
  • Follow-up review schedule

Modular Architecture

All review skills use a hub-and-spoke architecture with progressive loading:

  • pensive:shared: Common workflow, output templates, quality checklists
  • Each skill has modules/: Domain-specific details loaded on demand
  • Cross-plugin deps: imbue:evidence-logging, imbue:diff-analysis/modules/risk-assessment-framework

This reduces token usage by 50-70% for focused reviews while maintaining full capabilities.

Exit Criteria

  • All selected review skills executed
  • Findings consolidated and prioritized
  • Action plan created with ownership
  • Evidence logged per structured output format

Supporting Modules

Troubleshooting

Common Issues

If the auto-detection fails to identify the correct review skills, explicitly specify the mode (e.g., /full-review rust instead of just /full-review). If integration fails, check that TodoWrite logs are accessible and that evidence files were correctly written by the individual skills.

Source

git clone https://github.com/athola/claude-night-market
# SKILL.md lives at plugins/pensive/skills/unified-review/SKILL.md

Overview

Unified-review orchestrates multiple review types to provide a general, cross-domain assessment. It analyzes context, selects appropriate sub-skills, and produces integrated reporting. It should be used when a specific review type is not known and a full multi-domain review is desired.

How This Skill Works

The skill uses orchestrated sub-skills (e.g., rust-review, api-review, architecture-review, bug-review, test-review) and tools like skill-selector, context-analyzer, and report-integrator to determine which reviews to run, execute them, and then aggregate findings into a single, coherent report. It supports auto-detect, full-review, and focused-review usage patterns to adapt the workflow to the codebase context.

Quick Start

  1. Auto-detect and run appropriate reviews with /full-review
  2. Focus on specific areas with commands like /full-review api, /full-review architecture, /full-review bugs, /full-review tests, or /full-review all
  3. Verify outcomes by running tests (e.g., pytest -v) and reviewing the integrated report

Best Practices

  • Use unified-review when you don’t know which specific review applies
  • Run a full-review to cover multiple domains and generate an integrated report
  • Rely on the integrated output to surface cross-domain issues
  • Auto-detect changes and select the relevant sub-skills automatically
  • If a specific review type is known, use the specialized skill instead (e.g., bug-review, api-review, architecture-review)

Example Use Cases

  • A monorepo with API changes, architecture updates, and tests requiring cross-domain validation
  • A Rust project that also touches APIs and documentation; unified-review selects rust-review, api-review, and architecture-review
  • Before merging a feature branch that touches backend services and the test suite; integrated reporting helps governance
  • An ADR-driven project where multiple domains (architecture and API) need joint assessment
  • A multi-repo project where changes span code, tests, and Makefiles requiring cross-domain insights
