# red-book

**Red Book — Learn From PR Reviews**

Install: `npx machina-cli add skill justinjdev/fellowship/red-book --openclaw`
## Overview
Extracts conventions from PR review feedback and adds them to CLAUDE.md. This is the missing piece of the convention learning loop: chronicle bootstraps conventions, gather-lore studies them, warden enforces them, and this skill captures new ones from real reviewer feedback.
## When to Use
- After receiving PR review comments that reveal a convention you didn't know about
- After a "wrong approach" rejection — the reviewer's feedback is the most valuable signal
- Periodically, to process accumulated review feedback across recent PRs
## Process

### Step 1: Gather Feedback
Ask the user for the source:
"Paste the review comments, or give me a PR URL and I'll read the comments."
If given a PR URL, use `gh api repos/{owner}/{repo}/pulls/{number}/comments` and `gh api repos/{owner}/{repo}/pulls/{number}/reviews` to fetch review comments.
### Step 2: Classify Comments
For each review comment, classify it:
| Category | Description | Action |
|---|---|---|
| Convention | "We don't do it that way" — reveals a pattern or rule | Extract as a convention |
| Bug/Logic | Points out a functional error | Skip — not a convention |
| Nit/Style | Minor formatting preference | Extract only if it recurs across reviews |
| Question | Reviewer asking for clarification | Skip — not a convention |
Present the classification to the user: "Here's how I categorized the feedback. Any I got wrong?"
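A first pass over the table above can be sketched mechanically. This is a hedged illustration only: the keyword lists below are assumptions, not part of the skill, and real classification is a judgment call that the user confirms in the step above.

```python
# Rough first-pass triage of review comments into the four categories.
# Keyword lists are illustrative assumptions; a human confirms the result.
from typing import Literal

Category = Literal["Convention", "Bug/Logic", "Nit/Style", "Question"]

def classify_comment(body: str) -> Category:
    """Classify a single review comment by simple heuristics."""
    text = body.lower().strip()
    # Questions: the reviewer is asking, not prescribing.
    if text.endswith("?") or text.startswith(("why", "what", "how")):
        return "Question"
    # Functional errors are bugs, not conventions.
    if any(k in text for k in ("bug", "crash", "incorrect", "off-by-one")):
        return "Bug/Logic"
    # Minor formatting preferences.
    if any(k in text for k in ("nit", "whitespace", "typo", "formatting")):
        return "Nit/Style"
    # Default: "we don't do it that way" phrasing signals a convention.
    return "Convention"
```

In practice the heuristic only orders the review; the "Any I got wrong?" confirmation above remains the source of truth.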
### Step 3: Extract Conventions
For each comment classified as Convention, formalize it:
- **[Category]: [Rule]** — [Why, from reviewer's comment]
  Reference: [file:line from the PR where this was flagged]
Categories should match existing CLAUDE.md sections (e.g., Structure, Error Handling, Naming, Data Flow). If a comment doesn't fit an existing category, propose a new one.
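The entry shape above can be produced by a small formatter. This is a sketch: the function and field names are hypothetical, and only the output format comes from this document.

```python
# Format one extracted convention as a CLAUDE.md entry.
# Signature and argument names are illustrative assumptions.
def format_convention(category: str, rule: str, why: str, reference: str) -> str:
    """Render the '- **[Category]: [Rule]** ...' entry shape."""
    return (
        f"- **[{category}]: {rule}** — {why}\n"
        f"  Reference: {reference}"
    )
```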
### Step 4: Check for Duplicates

Read the `## Review Conventions` section of the current CLAUDE.md. For each extracted convention:
- If it's already documented: skip, note that it was missed during implementation (warden should have caught it)
- If it's a refinement of an existing rule: propose updating the existing rule
- If it's new: propose adding it
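The duplicate check above can be sketched as text processing, assuming CLAUDE.md is plain markdown with a `## Review Conventions` heading. The substring match here is a naive stand-in; in practice, judge semantic overlap, not exact wording.

```python
# Naive duplicate check against the ## Review Conventions section.
# Function names and the matching strategy are illustrative assumptions.
def review_conventions_section(claude_md: str) -> str:
    """Return the body of the ## Review Conventions section, or ""."""
    out, capturing = [], False
    for line in claude_md.splitlines():
        if line.strip() == "## Review Conventions":
            capturing = True
            continue
        if capturing and line.startswith("## "):
            break  # next top-level section ends ours
        if capturing:
            out.append(line)
    return "\n".join(out)

def is_duplicate(rule: str, claude_md: str) -> bool:
    """True if the rule's wording already appears in the section."""
    return rule.lower() in review_conventions_section(claude_md).lower()
```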
### Step 5: Update CLAUDE.md
Present the proposed additions/updates:
"Here's what I'd add to CLAUDE.md. Approve, edit, or skip each one."
For approved conventions, add them to the `## Review Conventions` section under the appropriate category. If the section doesn't exist, create it.
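The update step can be sketched as plain-text manipulation, assuming CLAUDE.md is edited as a string. The section-placement logic is an assumption about how the file is laid out; real usage would route this through normal file edits.

```python
# Append approved entries under ## Review Conventions, creating the
# section at the end of the file if it does not exist yet.
def add_conventions(claude_md: str, entries: list[str]) -> str:
    header = "## Review Conventions"
    if header not in claude_md:
        claude_md = claude_md.rstrip() + f"\n\n{header}\n"
    lines = claude_md.splitlines()
    # Insert before the next ## heading, i.e. at the end of our section.
    idx = lines.index(header)
    end = idx + 1
    while end < len(lines) and not lines[end].startswith("## "):
        end += 1
    return "\n".join(lines[:end] + entries + lines[end:])
```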
### Step 6: Cross-Reference
Check if the new conventions would be caught by existing skills:
- gather-lore: Would studying reference files have revealed this pattern? If yes, note which reference file demonstrates it.
- warden: Is the convention specific enough for warden to check mechanically? If not, refine the wording until it is.
Offer to add reference file entries to ## Reference Files if appropriate.
## Key Principles
- Reviewer feedback is the highest-signal source. It tells you what actually gets flagged, not what might get flagged.
- One convention per comment. Don't over-generalize from a single piece of feedback.
- Specific and actionable. "Use proper error handling" is useless. "Service errors must use AppError with HTTP status codes, not raw Error" is useful.
- Warden-checkable. If you can't imagine warden mechanically verifying the rule, it's too vague.
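One way to read "warden-checkable": the rule can be verified by a mechanical scan. A hedged sketch for the AppError example above, assuming TypeScript-style sources; the regex and file convention are assumptions, not part of the skill.

```python
# Mechanical check for the example rule: service errors must use
# AppError, not raw Error. Pattern is an illustrative assumption.
import re

RAW_ERROR = re.compile(r"throw\s+new\s+Error\(")

def violations(source: str) -> list[int]:
    """Return 1-based line numbers that throw a raw Error."""
    return [
        i + 1
        for i, line in enumerate(source.splitlines())
        if RAW_ERROR.search(line)
    ]
```

If a rule cannot be turned into a check like this, even in principle, its wording is probably still too vague for warden.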
## Source

[View on GitHub](https://github.com/justinjdev/fellowship/blob/main/skills/red-book/SKILL.md)

## Overview
Red Book automatically learns from PR review feedback by extracting conventions revealed in reviewer comments and adding them to CLAUDE.md. It closes the convention learning loop by documenting patterns that reviewers flag and making them checkable for future PRs.
## How This Skill Works
Gather PR feedback (paste comments or provide a PR URL) and fetch review comments via the GitHub API. Classify each comment into categories (Convention, Bug/Logic, Nit/Style, Question) and extract true conventions, formatting them for CLAUDE.md as structured entries. Check for duplicates against the current CLAUDE.md, propose additions or refinements, update CLAUDE.md, and cross-reference new conventions with gather-lore and warden to ensure they are detectable and enforceable.
## When to Use It
- After review comments reveal a convention you didn't know.
- After a "wrong approach" rejection—the reviewer's feedback signals a new rule.
- Periodically process accumulated review feedback across recent PRs.
- When CLAUDE.md lacks a convention that was flagged by new reviews.
- Before merging, to ensure newly identified conventions are reviewed and linked to references.
## Quick Start
- Step 1: Paste the PR review comments or provide a PR URL for analysis.
- Step 2: The tool classifies feedback and extracts candidate conventions for CLAUDE.md.
- Step 3: Review proposed CLAUDE.md updates and approve, edit, or skip each one.
## Best Practices
- Rely on reviewer feedback as the highest-signal source.
- Apply one convention per comment to avoid over-generalization.
- Make rules specific and actionable (not vague).
- Ensure conventions are warden-checkable and mechanically verifiable.
- Match CLAUDE.md categories (Structure, Error Handling, Naming, Data Flow) when documenting conventions.
## Example Use Cases
- **[Structure]**: Expose all public APIs via a single module entry point to reduce surface area and improve discoverability.
- **[Error Handling]**: Use AppError with HTTP status codes, not raw Errors, for consistent error signaling.
- **[Naming]**: Use camelCase for variables and functions and PascalCase for types and classes; avoid ambiguous abbreviations.
- **[Data Flow]**: Validate inputs at the boundary and pass validated data to downstream components.
- **[Nit/Style]**: Document recurring style preferences only when they recur across multiple reviews.