ln-005-agent-reviewer
Paths: File paths (`shared/`, `references/`, `../ln-*`) are relative to the skills repo root. If not found at CWD, locate this SKILL.md's directory and go up one level for the repo root.
Agent Reviewer (Universal)
Runs parallel external agent reviews on arbitrary context, critically verifies suggestions, returns filtered improvements.
Purpose & Scope
- Standalone utility in 0XX category (like ln-003, ln-004)
- Delegate any context to codex-review + gemini-review as background tasks in parallel
- Context always passed via file references (never inline in prompt)
- Process results as they arrive (first-finished agent processed immediately)
- Critically verify each suggestion; debate with agent if Claude disagrees
- Return filtered, deduplicated, verified suggestions
When to Use
- Manual invocation by user for independent review of any artifact
- Called by any skill needing external second opinion on plans, decisions, documents
- NOT tied to Linear, NOT tied to any pipeline
- Works with any context that can be saved to a file
Parameters
| Parameter | Value |
|---|---|
| review_type | `contextreview` |
| skill_group | `005` |
| prompt_template | `shared/agents/prompt_templates/context_review.md` |
| verdict_acceptable | `CONTEXT_ACCEPTABLE` |
Inputs
| Input | Required | Description |
|---|---|---|
| context_files | Yes | List of file paths containing context to review (relative to CWD) |
| identifier | No | Short label for file naming (default: `review_YYYYMMDD_HHMMSS`) |
| focus | No | List of areas to focus on (default: all 6) |
| review_title | No | Human-readable title (default: "Context Review") |
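A minimal invocation might pass inputs like the following (the file name and focus values here are illustrative, not prescribed by the skill):

```yaml
context_files:
  - plans/architecture.md      # existing artifact to review
identifier: arch-review        # used in generated file names
focus: [logic, risk]           # subset of the 6 default areas
review_title: "Architecture Review"
```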
Context delivery rule: Context is ALWAYS passed via files.
- If context already exists as files (plans, docs, code) -> pass file paths directly
- If context is a statement/decision from chat -> caller creates a temporary file in `.agent-review/context/` with the content, then passes the file path
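The chat-to-file case can be sketched as below (a minimal sketch; the helper name and signature are illustrative, not part of the skill's contract):

```python
from pathlib import Path

def materialize_context(content: str, identifier: str,
                        base: str = ".agent-review/context") -> str:
    """Write chat-sourced context to a file and return its path."""
    target = Path(base) / f"{identifier}_context.md"
    target.parent.mkdir(parents=True, exist_ok=True)  # ensure .agent-review/context/ exists
    target.write_text(content, encoding="utf-8")
    return str(target)  # caller appends this path to context_files
```

The returned path is then passed in `context_files` exactly like any pre-existing file.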
Workflow
MANDATORY READ: Load shared/references/agent_review_workflow.md for Health Check, Ensure .agent-review/, Load Review Memory, Run Agents, Critical Verification + Debate, Aggregate + Return, Save Review Summary, Fallback Rules, Critical Rules, and Definition of Done. Load shared/references/agent_delegation_pattern.md for Reference Passing Pattern, Review Persistence Pattern, Agent Timeout Policy, and Debate Protocol.
Unique Steps (before shared workflow)
1. Health check: per shared workflow, filter by `skill_group=005`.
2. Resolve identifier: If `identifier` is not provided, generate `review_YYYYMMDD_HHMMSS`. Sanitize: lowercase, replace spaces with hyphens, ASCII only.
3. Ensure `.agent-review/`: per shared workflow. Additionally create the `.agent-review/context/` subdir if it doesn't exist (for materialized context files).
4. Materialize context (if needed): If context is from chat/conversation (not an existing file):
   - Write content to `.agent-review/context/{identifier}_context.md`
   - Add this path to the `context_files` list
5. Build prompt: Read template `shared/agents/prompt_templates/context_review.md`.
   - Replace `{review_title}` with title or "Context Review"
   - Replace `{context_refs}` with a bullet list: `- {path}` per context file
   - Replace `{focus_areas}` with the filtered subset, or "All default areas" if no focus specified
   - Save to `.agent-review/{identifier}_contextreview_prompt.md` (single shared file -- both agents read the same prompt)
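Steps 2 and 5 above can be sketched together (a hedged sketch: the function names are illustrative, while the placeholder names follow the template contract described above):

```python
import re
from datetime import datetime

def resolve_identifier(identifier=None):
    """Default to review_YYYYMMDD_HHMMSS, then sanitize the label."""
    if not identifier:
        identifier = datetime.now().strftime("review_%Y%m%d_%H%M%S")
    identifier = identifier.lower().replace(" ", "-")
    return re.sub(r"[^\x00-\x7f]", "", identifier)  # keep ASCII only

def build_prompt(template, context_files, review_title="Context Review", focus=None):
    """Substitute the three placeholders the template defines."""
    refs = "\n".join(f"- {p}" for p in context_files)
    areas = ", ".join(focus) if focus else "All default areas"
    return (template.replace("{review_title}", review_title)
                    .replace("{context_refs}", refs)
                    .replace("{focus_areas}", areas))
```

The resulting string would then be saved once to `.agent-review/{identifier}_contextreview_prompt.md` for both agents.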
Shared Workflow Steps
6-9) Load Review Memory, Run Agents, Critical Verification + Debate, Aggregate + Return: per shared workflow.
- `{review_type}` in challenge template = review_title or "Context Review"
- `{story_ref}` in challenge template = identifier
10) Save Review Summary: per shared workflow "Step: Save Review Summary".
Output Format
```yaml
verdict: CONTEXT_ACCEPTABLE | SUGGESTIONS | SKIPPED
suggestions:
  - area: "logic | feasibility | completeness | consistency | best_practices | risk"
    issue: "What is wrong or could be improved"
    suggestion: "Specific actionable change"
    confidence: 95
    impact_percent: 15
    source: "codex-review"
    resolution: "accepted | accepted_after_debate | accepted_after_followup | rejected"
```
Agent stats and debate log per shared workflow output schema.
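A caller consuming this output might reduce it to applied suggestions as sketched below (assuming the YAML has already been parsed into Python dicts; field names follow the schema above):

```python
ACCEPTED = {"accepted", "accepted_after_debate", "accepted_after_followup"}

def filter_suggestions(suggestions):
    """Keep accepted suggestions, dropping duplicates by (area, issue)."""
    seen, kept = set(), []
    for s in suggestions:
        key = (s["area"], s["issue"])
        if s["resolution"] in ACCEPTED and key not in seen:
            seen.add(key)
            kept.append(s)
    return kept
```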
Verdict Escalation
- No escalation. Suggestions are advisory only.
- Caller decides how to apply accepted suggestions.
Reference Files
- Shared workflow: `shared/references/agent_review_workflow.md`
- Agent delegation pattern: `shared/references/agent_delegation_pattern.md`
- Prompt template (review): `shared/agents/prompt_templates/context_review.md`
- Review schema: `shared/agents/schemas/context_review_schema.json`
Version: 1.0.0 Last Updated: 2026-02-25
Source
git clone https://github.com/levnikolaevich/claude-code-skills.git (skill file: ln-005-agent-reviewer/SKILL.md)
Overview
Agent Reviewer (Universal) delegates arbitrary context (plans, decisions, documents, architecture proposals) to Codex and Gemini for independent review using a debate protocol. Context is always passed via files, enabling non-inline prompts and reproducible references. Results are critically verified, deduplicated, and returned as filtered improvements.
How This Skill Works
Users supply context as file paths; if needed, materialize chat content into .agent-review/context/{identifier}_context.md. The skill builds a single prompt from the shared template shared/agents/prompt_templates/context_review.md, substituting the title, context_refs, and focus_areas, and saves it to .agent-review/{identifier}_contextreview_prompt.md. Codex and Gemini run in parallel, their suggestions are debated and validated, and a final, deduplicated set of improvements is returned.
When to Use It
- You need an independent second opinion on any artifact (plans, decisions, documents).
- You want parallel reviews from Codex and Gemini as background tasks.
- You have relevant context stored as files or can materialize it into .agent-review/context.
- You require critical verification, debate when Claude disagrees, and deduplicated results.
- You want a decoupled review workflow not tied to any pipeline or tool (not Linear).
Quick Start
- Step 1: Place all relevant context into files and list their paths in context_files; if needed, write content to .agent-review/context/{identifier}_context.md.
- Step 2: Run the skill to build .agent-review/{identifier}_contextreview_prompt.md from the template and start parallel reviews by Codex and Gemini.
- Step 3: Retrieve the aggregated, verified review summary and apply the suggested improvements.
Best Practices
- Keep all context in files and reference their paths; avoid inline prompts.
- Provide a clear review_title and optional focus areas to guide the debate.
- Ensure .agent-review/ and .agent-review/context/ exist before running.
- Materialize any chat content into a file using the context rule to maintain traceability.
- Review the aggregated results and apply the verified suggestions, ignoring duplicates.
Example Use Cases
- Review a software architecture proposal saved as plans/arc.md using external agent reviews.
- Independent review of a product roadmap decision saved as docs/roadmap.md.
- Audit an API specification (specs/api.yaml) for compatibility and security via external reviewers.
- Security policy draft (policies/security.md) reviewed for compliance against standards.
- ADR for a data pipeline architecture examined for justification and trade-offs.