
code-review

npx machina-cli add skill kasperjunge/agent-resources/code-review --openclaw
Files (1)
SKILL.md
4.5 KB

Code Review

Rigorous code review focused on quality, maintainability, and architectural soundness.

When to Use

  • After implementing a feature or fix
  • Before committing changes
  • When explicitly asked to review code
  • Before creating a PR

Input

Default: Use current staged/committed changes.

If argument provided:

  • GitHub issue number/URL: Fetch context with scripts/gh_issue_phase.sh get-issue $ARG to understand the original task requirements and decisions from prior phases.

Method

Start by inspecting the changes. Use the deterministic script to collect the review context:

scripts/collect_review_context.sh

If on the main branch, review the staged git diff. If on a different branch, review committed and uncommitted changes compared to main.
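The branch-dependent selection above can be sketched as follows. This is an illustrative dry-run only: the `select_review_diff` function name and its behavior are assumptions for the example, not the actual contents of `scripts/collect_review_context.sh`.

```shell
# Hypothetical sketch: print the diff command(s) a review would use,
# based on the branch name passed in. Dry-run only; names are illustrative.
select_review_diff() {
  branch="$1"
  if [ "$branch" = "main" ]; then
    # On main: review only what is staged for commit.
    echo "git diff --staged"
  else
    # On a feature branch: committed changes relative to main...
    echo "git diff main...HEAD"
    # ...plus anything still uncommitted in the working tree.
    echo "git diff"
  fi
}

select_review_diff "main"            # → git diff --staged
select_review_diff "feature/login"
```

Note the triple-dot form `main...HEAD`, which diffs against the merge base rather than the tip of main, so unrelated upstream commits don't pollute the review.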

Dispatch two subagents to carefully review the code changes. Tell them they're competing with another agent: whoever finds more legitimate issues wins honour and glory. Make sure they examine both architecture AND implementation, and check every criterion below.

Review Criteria

1. Code Quality

| Check | Look For |
|---|---|
| DRY | Duplicated logic, copy-pasted code, repeated patterns that should be abstracted |
| Code Bloat | Unnecessary code, over-engineering, premature abstractions, dead code |
| Bugs | Logic errors, edge cases, off-by-one errors, null/undefined handling |

2. Code Slop & Technical Debt

| Symptom | Description |
|---|---|
| Magic values | Hardcoded strings/numbers without constants |
| Inconsistent naming | Mixed conventions, unclear names |
| Missing error handling | Unhandled exceptions, silent failures |
| TODO/FIXME comments | Deferred work that should be tracked |
| Commented-out code | Delete it or explain why it exists |
| Dependency bloat | New deps when stdlib/existing deps suffice |
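As a minimal shell illustration of two symptoms above (magic values and silent failures) and their fixes; the `retry` helper is invented for the example:

```shell
# Before (symptomatic): a bare "3" and a swallowed failure, e.g.
#   for i in 1 2 3; do some_command && break; done
# After: a named constant and an explicit, loud error path.
readonly MAX_RETRIES=3   # named constant instead of a magic number

retry() {
  attempt=1
  while [ "$attempt" -le "$MAX_RETRIES" ]; do
    if "$@"; then
      return 0
    fi
    attempt=$((attempt + 1))
  done
  # Fail loudly instead of silently: report and propagate the error.
  echo "command failed after $MAX_RETRIES attempts: $*" >&2
  return 1
}

retry true && echo "succeeded"   # → succeeded
```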

3. Architecture (in context of broader system)

| Principle | Review Questions |
|---|---|
| Modularity | Are changes properly bounded? Do they respect module boundaries? |
| Cohesion | Does each unit have a single, clear responsibility? |
| Separation of Concerns | Is business logic mixed with presentation/data access? |
| Information Hiding | Are implementation details properly encapsulated? |
| Coupling | Does this create tight coupling? Are dependencies appropriate? |

4. Devil's Advocate

Challenge the implementation:

  • Is this the simplest solution? Could it be simpler?
  • What happens under load/scale?
  • What are the failure modes?
  • What assumptions might be wrong?
  • Is there a more fundamentally correct approach, even if harder?

5. Test Effectiveness

| Check | Criteria |
|---|---|
| Coverage | Are the important paths tested? |
| Meaningful assertions | Do tests verify behavior, not implementation? |
| Edge cases | Are boundaries and error conditions tested? |
| Readability | Can you understand what's tested from test names? |
| Fragility | Will tests break on valid refactors? |
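A toy illustration of a behavioral assertion versus an implementation-coupled one; the `slugify` function is invented for the example:

```shell
# A tiny function under test; invented for illustration.
slugify() {
  printf '%s' "$1" | tr '[:upper:] ' '[:lower:]-'
}

# Behavioral assertion: checks the contract, so it survives refactors
# (e.g., rewriting slugify with sed or parameter expansion).
[ "$(slugify "Hello World")" = "hello-world" ] && echo "behavioral test passed"

# Fragile alternative (avoid): asserting that the implementation uses
# "tr" internally couples the test to one particular implementation.
```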

Output Format

Report findings organized by severity:

## Code Review Findings

### Critical (must fix)
- [Issue]: [Location] - [Why it matters]

### Important (should fix)
- [Issue]: [Location] - [Recommendation]

### Minor (consider fixing)
- [Issue]: [Location] - [Suggestion]

### Positive Observations
- [What was done well]
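A hypothetical filled-in report, using entirely invented findings and file names, might look like:

```markdown
## Code Review Findings

### Critical (must fix)
- Unvalidated input: `api/handlers.py:42` - user-supplied path reaches `open()` directly, allowing path traversal

### Important (should fix)
- Duplicated retry logic: `client.py` and `sync.py` - extract a shared helper

### Minor (consider fixing)
- Magic timeout value: `worker.py:17` - name the constant (e.g. `POLL_INTERVAL_SECONDS`)

### Positive Observations
- New tests cover both success and failure paths of the sync job
```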

GitHub Issue Tracking

If a GitHub issue was provided or is available from prior phases:

Post review findings as a phase comment and set the label:

echo "$REVIEW_SUMMARY" | scripts/gh_issue_phase.sh post-phase $ISSUE review
scripts/gh_issue_phase.sh set-label $ISSUE phase:review

Pass the issue number to the next skill (e.g., /commit #42).

Common Mistakes

| Mistake | Correction |
|---|---|
| Surface-level review | Dig into logic, trace data flow |
| Ignoring context | Review changes in relation to the system |
| Only finding negatives | Note what's done well |
| Vague feedback | Be specific: file, line, concrete suggestion |
| Bikeshedding | Focus on impact, not style preferences |

Red Flags - STOP and Investigate

  • New dependencies added without clear justification
  • Changes that bypass existing patterns without explanation
  • Test coverage decreased
  • Complex logic without tests
  • Security-sensitive code modified

Source

git clone https://github.com/kasperjunge/agent-resources

View on GitHub: https://github.com/kasperjunge/agent-resources/blob/main/skills/development/workflow/code-review/SKILL.md

Overview

Code Review analyzes changes before they merge, emphasizing quality, maintainability, and architectural soundness. It helps catch defects early, enforce standards, and ensure decisions align with project goals.

How This Skill Works

Start by inspecting the changes and collecting review context with scripts/collect_review_context.sh. If you’re on main, review the staged diff; otherwise review committed and uncommitted changes against main. Then dispatch two subagents to evaluate architecture and implementation, comparing findings and focusing on the criteria listed.

When to Use It

  • After implementing a feature or fix
  • Before committing changes
  • When explicitly asked to review code
  • Before creating a pull request (PR)
  • During an open PR review to reassess changes

Quick Start

  1. Run scripts/collect_review_context.sh to gather context.
  2. On main, review the staged diff; on other branches, compare committed and uncommitted changes against main.
  3. Dispatch two subagents to review, then consolidate their findings and report by severity.

Best Practices

  • Run the deterministic review context script (scripts/collect_review_context.sh) to start.
  • Review both architecture and implementation against the defined criteria (DRY, debt, modularity).
  • If a GitHub issue is provided, fetch its context with scripts/gh_issue_phase.sh get-issue $ARG to understand requirements.
  • Keep changes bounded and respect module boundaries; watch for coupling and information hiding.
  • Document findings using the prescribed Code Review Findings format with clear severities.

Example Use Cases

  • Adding a new API endpoint with input validation and error handling.
  • Refactoring to remove duplicated logic and extract shared utilities.
  • Fixing a null/undefined handling bug and adding defensive checks.
  • Addressing a potential performance bottleneck and eliminating code bloat.
  • Improving test coverage for edge cases and adding meaningful assertions.

