review-core
npx machina-cli add skill athola/claude-night-market/review-core --openclaw

Core Review Workflow
Table of Contents
- When to Use
- Activation Patterns
- Required TodoWrite Items
- Step 1 – Establish Context
- Step 2 – Inventory Scope
- Step 3 – Capture Evidence
- Step 4 – Structure Deliverables
- Step 5 – Contingency Plan
- Troubleshooting
When To Use
- Use this skill at the beginning of any detailed review workflow (e.g., for architecture, math, or an API).
- It provides a consistent structure for capturing context, logging evidence, and formatting the final report, which makes the findings of different reviews comparable.
When NOT To Use
- Diff-focused analysis: use the diff-analysis skill instead.
Activation Patterns
Trigger Keywords: review, audit, analysis, assessment, evaluation, inspection
Contextual Cues:
- "review this code/design/architecture"
- "conduct an audit of"
- "analyze this for issues"
- "evaluate the quality of"
- "perform an assessment"
Auto-Load When: Any review-specific workflow is detected or when analysis methodologies are requested.
Required TodoWrite Items
- review-core:context-established
- review-core:scope-inventoried
- review-core:evidence-captured
- review-core:deliverables-structured
- review-core:contingencies-documented
Step 1 – Establish Context (review-core:context-established)
- Confirm pwd, repo, branch, and upstream base (e.g., git status -sb, git rev-parse --abbrev-ref HEAD).
- Note the comparison target (merge base, release tag) so later diffs reference a concrete range.
- Summarize the feature/bug/initiative under review plus stakeholders and deadlines.
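The Step 1 checks can be sketched as a short shell session. The throwaway repository below stands in for your actual checkout, and origin/main as the upstream base is an assumption to adapt to your repo:

```shell
# Demonstrate the Step 1 context commands in a throwaway repo
# (in a real review, run them in your actual checkout).
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git -c user.email=r@example.com -c user.name=Reviewer \
    commit -q --allow-empty -m "baseline"
pwd                               # confirm working directory
git status -sb                    # branch and upstream at a glance
git rev-parse --abbrev-ref HEAD   # current branch name
# In a real repo, pin the comparison target instead, e.g.:
#   base=$(git merge-base HEAD origin/main)
base=$(git rev-parse HEAD)
echo "comparison base: $base"
```

Recording the resolved base commit (not just a branch name) keeps later diffs reproducible even if the branch moves.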
Step 2 – Inventory Scope (review-core:scope-inventoried)
- List relevant artifacts for this review: source files, configs, docs, specs, generated assets (OpenAPI, Makefiles, ADRs, notebooks, etc.).
- Record how you enumerated them (commands like rg --files -g '*.mk', ls docs, cargo metadata).
- Capture assumptions or constraints inherited from the plan/issue so the domain-specific analysis can cite them.
Step 3 – Capture Evidence (review-core:evidence-captured)
- Log every command/output that informs the review (e.g., git diff --stat, make -pn, cargo doc, web.run citations). Keep snippets or line numbers for later reference.
- Track open questions or variances found during preflight; if they block progress, record owners/timelines now.
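One way to keep that log honest is a tiny wrapper that records each command next to its output. The run helper below is a hypothetical name, not part of the skill:

```shell
# Hypothetical evidence-capture helper: prefixes each logged command
# with '$ ' and appends its combined stdout/stderr.
log=$(mktemp)
run() {
  printf '$ %s\n' "$*" >>"$log"
  "$@" >>"$log" 2>&1
  printf -- '---\n' >>"$log"
}
run uname -s
run echo "stand-in for: git diff --stat"
cat "$log"
echo "commands logged: $(grep -c '^\$' "$log")"
```

In a real review you would point run at the actual diff/build/doc commands and attach the log file as the evidence appendix.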
Step 4 – Structure Deliverables (review-core:deliverables-structured)
- Prepare the reporting skeleton shared by all reviews:
- Summary (baseline, scope, recommendation)
- Ordered findings (severity, file:line, principle violated, remediation)
- Follow-up tasks (owner + due date)
- Evidence appendix (commands, URLs, notebooks)
- Validate that the domain-specific checklist will populate each section before concluding.
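The skeleton can be scaffolded up front so the domain checklist only has to fill it in. The section names below follow the list above, while the findings-table columns are one plausible layout:

```shell
# Write the shared report skeleton to a scratch file.
workdir=$(mktemp -d)
report="$workdir/review-report.md"
cat >"$report" <<'EOF'
# Summary
- Baseline:
- Scope:
- Recommendation:

# Findings (ordered by severity)
| Severity | file:line | Principle violated | Remediation |
|----------|-----------|--------------------|-------------|

# Follow-up Tasks
- [ ] Task (owner, due date)

# Evidence Appendix
- Commands, URLs, notebooks
EOF
grep -c '^# ' "$report"   # count of top-level sections
```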
Step 5 – Contingency Plan (review-core:contingencies-documented)
- If a required tool or skill is unavailable (e.g., web.run), document the alternative steps that will be taken and any limitations this introduces. This helps reviewers understand any gaps in coverage.
- Note any outstanding approvals or data needed to complete the review.
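A contingency entry can be recorded mechanically whenever a tool probe fails. The web.run name follows the example above, and the wording of the note is just a suggestion:

```shell
# Probe for a required tool; if it is missing, document the fallback.
notes=$(mktemp)
tool="web.run"
if ! command -v "$tool" >/dev/null 2>&1; then
  {
    echo "CONTINGENCY: $tool unavailable."
    echo "Alternative: cite cached/offline documentation."
    echo "Limitation: sources may be stale; flag findings relying on them."
  } >>"$notes"
fi
cat "$notes"
```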
Exit Criteria
- All TodoWrite items complete with concrete notes (commands run, files listed, evidence paths).
- Domain-specific review can now assume consistent context/evidence/deliverable scaffolding and focus on specialized analysis.
Troubleshooting
Common Issues
- Command not found: ensure all dependencies are installed and in PATH.
- Permission errors: check file permissions and run with appropriate privileges.
- Unexpected behavior: enable verbose logging with the --verbose flag.
Source
git clone https://github.com/athola/claude-night-market (skill definition: plugins/imbue/skills/review-core/SKILL.md)

Overview
review-core provides a structured scaffold to begin any detailed review, ensuring consistent context, evidence capture, and deliverables. It standardizes the setup for reviews of architecture, code/design, APIs, or other initiatives, enabling comparable findings across projects.
How This Skill Works
Activated at the start of a review, review-core guides you through establishing context, inventorying scope, capturing evidence, and structuring deliverables. It also defines the required TodoWrite items to keep progress aligned with the review plan.
When to Use It
- Beginning a detailed review workflow (e.g., architecture, code/design, or API reviews).
- Auditing a feature or subsystem where consistent context and evidence are required.
- Starting a security, reliability, or quality audit with a formal deliverables format.
- Preparing for a pre-release review to ensure alignment before shipping.
- Reviewing documentation, ADRs, or API specs to standardize findings.
Quick Start
- Step 1: Establish Context (pwd, repo, branch, upstream base) and summarize the review target.
- Step 2: Inventory Scope: enumerate artifacts and record the enumeration method (e.g., rg, ls, cargo metadata).
- Step 3: Capture Evidence: log commands/outputs, record open questions, and assign owners if blockers appear.
Best Practices
- Confirm pwd, repo, branch, and upstream base and note the comparison target (Step 1).
- List relevant artifacts (source files, configs, docs, specs) and record how you enumerated them.
- Log every command/output that informs the review and capture open questions or blockers.
- Use the reporting skeleton (summary, findings, tasks, evidence) to structure deliverables.
- Document contingencies if needed tools or data are unavailable (e.g., web.run).
Example Use Cases
- Reviewing a new API endpoint in a microservice repository to align findings.
- Conducting an architecture design review for a feature across services.
- Running a code/design audit to capture consistent context and evidence.
- Performing a security-focused review of a critical module.
- Reviewing API specs and ADRs to ensure consistent documentation.