design-review
Design Review Orchestrator
Install: npx machina-cli add skill codybrom/clairvoyance/design-review --openclaw
When invoked with $ARGUMENTS, scope the entire review to the specified target. Read the target code first, then proceed through the phases below in order. This skill orchestrates other skills from Clairvoyance (https://clairvoyance.fyi). It works best when the full collection is installed.
This skill does not replace individual lenses. It sequences them into a diagnostic funnel that moves from broad to narrow, skipping work when early phases find nothing actionable.
Diagnostic Funnel
Phase 1: Complexity Triage
Apply complexity-recognition checks against the target.
- Identify the three symptoms: change amplification, cognitive load, unknown unknowns
- Trace any symptoms to root causes: dependencies or obscurity
- Weight findings by the complexity formula: high-traffic code first
This phase determines whether the target has measurable complexity problems. If it does, subsequent phases diagnose where.
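As a concrete illustration, change amplification typically shows up as one fact duplicated across call sites, so a single decision cannot be changed in a single place. The Python sketch below (names and values are hypothetical, not from the skill itself) shows the pattern this phase flags and its root-cause fix:

```python
# Change amplification: the 8% tax rate is duplicated at every call
# site, so changing the rate forces coordinated edits in several places.
def invoice_total(subtotal):
    return subtotal * 1.08  # hard-coded rate

def refund_amount(subtotal):
    return subtotal * 1.08  # same constant, duplicated

# Root cause addressed: one authoritative definition confines any
# future rate change to a single line.
TAX_RATE = 0.08

def invoice_total_v2(subtotal):
    return subtotal * (1 + TAX_RATE)

def refund_amount_v2(subtotal):
    return subtotal * (1 + TAX_RATE)
```

The duplication is the symptom; the dependency between call sites that should not know each other is the root cause the triage traces.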
Phase 2: Structural Review
Apply these lenses to the target's module-level architecture:
- module-boundaries: Are the boundaries drawn around knowledge domains or around steps in a process?
- deep-modules: Does each module provide powerful functionality behind a simple interface? Check for classitis, pass-through methods and shallow wrappers.
- abstraction-quality: Does each layer provide a genuinely different way of thinking, or do adjacent layers duplicate the same abstraction?
Focus on the modules that Phase 1 identified as highest-complexity. If Phase 1 found nothing, scan the largest or most-connected modules.
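To make the deep-modules lens concrete, here is a minimal Python sketch (names hypothetical, not from the skill) contrasting a shallow pass-through wrapper with a deeper function that hides file I/O, parsing, and the missing-file case behind one call:

```python
import json
from pathlib import Path

# Shallow module: every method is a pure pass-through over the dict it
# wraps, adding interface without adding functionality (classitis).
class ConfigWrapper:
    def __init__(self, data):
        self._data = data

    def get(self, key):
        return self._data.get(key)   # pass-through

    def keys(self):
        return self._data.keys()     # pass-through

# Deep module: a simple interface (path in, dict out) that absorbs
# file access, JSON parsing, and the missing-file edge case.
def load_config(path, defaults=None):
    p = Path(path)
    merged = dict(defaults or {})
    if p.exists():
        merged.update(json.loads(p.read_text()))
    return merged
```

The review question is not "is the wrapper short?" but "does its interface buy the caller anything the underlying object did not already provide?"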
Phase 3: Interface Review
Apply these lenses to the interfaces exposed by the modules from Phase 2:
- information-hiding: Does the interface leak implementation details? Check for back-door leakage (shared knowledge not in any interface).
- general-vs-special: Does the interface mix general-purpose mechanisms with special-case knowledge? Check for boolean parameters serving one caller.
- pull-complexity-down: Are callers forced to handle complexity the module could absorb? Check for exposed edge cases, required configuration and exceptions that could be defined away.
- error-design: Are errors defined out of existence where possible? Check for catch-and-ignore, overexposed exceptions and error handling longer than the happy path.
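As an illustration of defining errors out of existence, the Python sketch below (hypothetical helper names) contrasts a deletion helper that overexposes an exception with one that absorbs the edge case so callers need no handler:

```python
import os
from pathlib import Path

# Overexposed exception: every caller must wrap this in try/except,
# because "file already gone" surfaces as FileNotFoundError.
def delete_file(path):
    os.remove(path)

# Error defined out of existence: the contract is "ensure the file is
# gone," which succeeds whether or not the file existed.
def ensure_deleted(path):
    Path(path).unlink(missing_ok=True)
```

The second form pulls complexity down into the module: one implementation absorbs the edge case instead of every caller re-handling it.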
Phase 4: Surface Review
Apply these lenses to naming and documentation:
- naming-obviousness: Do names create precise mental images? Check the isolation test: seen without context, could the name mean almost anything?
- comments-docs: Do comments capture what the code cannot say (intent, rationale, constraints)? Check for comments that repeat code and implementation details contaminating interface documentation.
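For example, the comments-docs lens separates comments that merely restate the code from comments that record intent and constraints the code cannot express. A small Python sketch (the numbers and rationale are illustrative, not from the skill):

```python
# Fails the lens: the comment repeats what the code already says.
# increment the counter by one
# counter += 1

def backoff_delay(attempt):
    # Passes the lens: captures a constraint invisible in the code.
    # Cap at 30s because (in this illustrative scenario) the upstream
    # gateway drops idle connections after 60s, so a retry must land
    # well inside that window.
    return min(2 ** attempt, 30)
```

The test for each comment is whether deleting it would lose information a reader could not recover from the code alone.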
Phase 5: Red Flags Sweep
Run the full 17-flag red-flags checklist against the target. Any flag triggered in Phases 1-4 will already be marked. This phase catches flags that earlier phases may not have surfaced (especially the Process flags 15-17: No Alternatives Considered, Tactical Momentum, Catch-and-Ignore).
Early Termination
If Phase 1 finds no measurable complexity AND Phase 5 triggers zero flags, stop. Report the target as clean. Do not force findings where none exist.
Prioritization
Rank findings in this order:
- Syndrome clusters: Multiple flags pointing to the same root cause (e.g., information leakage + conjoined methods + repetition all stemming from one misplaced boundary). These indicate systemic issues. Fixing the root cause resolves all flags in the cluster.
- Boundary issues: Information leakage, module boundary problems and abstraction mismatches. These compound over time and infect adjacent code.
- Canary flags: Hard to Pick Name, Hard to Describe, Non-obvious Code, No Alternatives Considered. These are the cheapest signals. Catch them and the structural flags never materialize.
- Structural issues: Shallow modules, pass-through methods, classitis. These require refactoring but affect a bounded area.
- Surface issues: Naming and documentation problems. Important but lowest cost to fix and lowest risk if deferred.
Source
https://github.com/codybrom/clairvoyance/blob/main/skills/design-review/SKILL.md
Overview
design-review orchestrates a structured evaluation by running Clairvoyance skills in a diagnostic funnel—from complexity triage through structural, interface, and surface checks to a red-flags sweep. It reviews a file, module, or PR to deliver a comprehensive, prioritized assessment of overall design quality rather than a single-lens check. It is not intended for applying one specific lens or for analyzing how code evolved over time.
How This Skill Works
It scopes the target (file, module, or PR), reads the code, and executes Phases 1 through 5 in order: Complexity Triage, Structural Review, Interface Review, Surface Review, and Red Flags Sweep. It orchestrates lenses such as complexity-recognition, module-boundaries, deep-modules, abstraction-quality, information-hiding, general-vs-special, pull-complexity-down, error-design, naming-obviousness, and comments-docs, and skips work when early phases find nothing actionable. If Phase 1 finds no measurable complexity and Phase 5 triggers zero flags, it terminates early.
When to Use It
- When you want a comprehensive, prioritized assessment of a file, module, or PR instead of a single-lens check
- When reviewing a target with potential structural or interface complexity to guide deeper analysis
- When you need to validate module boundaries, abstraction quality, and naming/docs together
- When you want to surface red flags that earlier phases might miss via the 17-flag checklist
- During design audits to assess overall design quality across components and packages
Quick Start
- Step 1: Specify the target file, module, or PR to review
- Step 2: Run design-review to execute Phases 1–5 in order
- Step 3: Inspect Phase results, address high-priority issues, and rerun as needed
Best Practices
- Run design-review after using individual lenses to get a synthesized, prioritized set of findings
- Let Phase 1 results drive which modules advance to Phase 2 to focus effort
- Prioritize findings by syndrome clusters and impact to guide remediation
- Use Phase 5 red-flags sweep to catch issues missed by earlier phases (No Alternatives Considered, Tactical Momentum, Catch-and-Ignore, etc.)
- Document the rationale for high-priority findings to align teams and enable easy re-review
Example Use Cases
- Auditing a PR introducing a new module to verify boundaries and abstraction layers
- Assessing a large feature branch for structural flaws before merge
- Reviewing a package to ensure naming, comments, and docs match the intended design
- Evaluating a legacy module to surface naming and documentation gaps across layers
- Comparing two architectural approaches to decide which design preserves boundaries and abstraction