review-implementing
Install with: npx machina-cli add skill mhattingpete/claude-skills-marketplace/review-implementing --openclaw
Review Feedback Implementation
Systematically process and implement changes based on code review feedback.
When to Use
Use this skill when the user:
- Provides reviewer comments or feedback
- Pastes PR review notes
- Mentions implementing review suggestions
- Says "address these comments" or "implement feedback"
- Shares list of changes requested by reviewers
Systematic Workflow
1. Parse Reviewer Notes
Identify individual feedback items:
- Split numbered lists (1., 2., etc.)
- Handle bullet points or unnumbered feedback
- Extract distinct change requests
- Clarify ambiguous items before starting
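The parsing step above can be sketched as a small helper. This is an illustrative assumption about one way to split notes, not the skill's actual implementation; the function name parse_feedback is hypothetical:

```python
import re

def parse_feedback(notes: str) -> list[str]:
    """Split raw reviewer notes into individual feedback items.

    Handles numbered lists ("1.", "2)") and bullet points ("-", "*");
    blank lines are skipped. Hypothetical sketch for illustration.
    """
    items: list[str] = []
    for line in notes.splitlines():
        line = line.strip()
        if not line:
            continue
        # Strip a leading "1." / "2)" / "-" / "*" marker, if any
        cleaned = re.sub(r"^(\d+[.)]|[-*])\s*", "", line)
        if cleaned:
            items.append(cleaned)
    return items

notes = """
1. Add type hints to the extract function
2. Fix duplicate tag detection logic
- Update docstring in chain.py
"""
print(parse_feedback(notes))
```

Ambiguous items that survive parsing (e.g. "clean this up") should still be clarified with the user before any todo is created.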
2. Create Todo List
Use the TodoWrite tool to create actionable tasks:
- Each feedback item becomes one or more todos
- Break down complex feedback into smaller tasks
- Make tasks specific and measurable
- Mark the first task as in_progress before starting
Example:
- Add type hints to extract function
- Fix duplicate tag detection logic
- Update docstring in chain.py
- Add unit test for edge case
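The status rules used throughout this workflow (only one todo in_progress at a time, mark completed immediately) can be sketched as a minimal tracker. This is a hypothetical Python stand-in for illustration, not the actual TodoWrite tool:

```python
from dataclasses import dataclass, field

@dataclass
class Todo:
    description: str
    status: str = "pending"  # pending | in_progress | completed

@dataclass
class TodoList:
    todos: list[Todo] = field(default_factory=list)

    def add(self, description: str) -> None:
        self.todos.append(Todo(description))

    def start(self, index: int) -> None:
        # Enforce the rule: only one todo may be in_progress at a time
        if any(t.status == "in_progress" for t in self.todos):
            raise RuntimeError("another todo is already in_progress")
        self.todos[index].status = "in_progress"

    def complete(self, index: int) -> None:
        # Mark completed immediately after finishing, before starting the next
        self.todos[index].status = "completed"

todos = TodoList()
todos.add("Add type hints to extract function")
todos.add("Fix duplicate tag detection logic")
todos.start(0)
todos.complete(0)
todos.start(1)  # allowed once the previous item is completed
```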
3. Implement Changes Systematically
For each todo item:
Locate relevant code:
- Use Grep to search for functions/classes
- Use Glob to find files by pattern
- Read current implementation
Make changes:
- Use Edit tool for modifications
- Follow project conventions (CLAUDE.md)
- Preserve existing functionality unless changing behavior
Verify changes:
- Check syntax correctness
- Run relevant tests if applicable
- Ensure changes address reviewer's intent
Update status:
- Mark todo as completed immediately after finishing
- Move to next todo (only one in_progress at a time)
4. Handle Different Feedback Types
Code changes:
- Use Edit tool for existing code
- Follow type hint conventions (PEP 604/585)
- Maintain consistent style
New features:
- Create new files with Write tool if needed
- Add corresponding tests
- Update documentation
Documentation:
- Update docstrings following project style
- Modify markdown files as needed
- Keep explanations concise
Tests:
- Write tests as functions, not classes
- Use descriptive names
- Follow pytest conventions
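A sketch of the function-style pytest conventions above, using a hypothetical detect_duplicate_tags helper as the code under test:

```python
# test_tags.py — tests written as plain functions with descriptive names,
# runnable with pytest (e.g. `uv run pytest test_tags.py`)

def detect_duplicate_tags(tags: list[str]) -> list[str]:
    """Hypothetical function under test: return tags seen more than once."""
    seen: set[str] = set()
    dupes: list[str] = []
    for tag in tags:
        if tag in seen and tag not in dupes:
            dupes.append(tag)
        seen.add(tag)
    return dupes

def test_detects_repeated_tags():
    assert detect_duplicate_tags(["a", "b", "a"]) == ["a"]

def test_empty_input_returns_no_duplicates():
    assert detect_duplicate_tags([]) == []
```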
Refactoring:
- Preserve functionality
- Improve code structure
- Run tests to verify no regressions
5. Validation
After implementing changes:
- Run affected tests
- Check for linting errors: uv run ruff check
- Verify changes don't break existing functionality
6. Communication
Keep user informed:
- Update todo list in real-time
- Ask for clarification on ambiguous feedback
- Report blockers or challenges
- Summarize changes at completion
Edge Cases
Conflicting feedback:
- Ask user for guidance
- Explain conflict clearly
Breaking changes required:
- Notify user before implementing
- Discuss impact and alternatives
Tests fail after changes:
- Fix tests before marking todo complete
- Ensure all related tests pass
Referenced code doesn't exist:
- Ask user for clarification
- Verify understanding before proceeding
Important Guidelines
- Always use TodoWrite for tracking progress
- Mark todos completed immediately after each item
- Only one todo in_progress at any time
- Don't batch completions - update status in real-time
- Ask questions for unclear feedback
- Run tests if changes affect tested code
- Follow CLAUDE.md conventions for all code changes
- Use conventional commits if creating commits afterward
Source
View the skill definition at https://github.com/mhattingpete/claude-skills-marketplace/blob/main/engineering-workflow-plugin/skills/review-implementing/SKILL.md, or clone the repository: git clone https://github.com/mhattingpete/claude-skills-marketplace
Overview
Review Feedback Implementation helps you transform reviewer comments into concrete tasks, track progress with TodoWrite, and verify changes align with the reviewer’s intent. It provides a repeatable workflow for parsing feedback, creating actionable todos, and validating updates to preserve code quality.
How This Skill Works
First, parse reviewer notes into discrete change requests, clarifying ambiguities. Then, create todos with TodoWrite, breaking complex feedback into trackable steps. For each todo, locate the relevant code, apply edits, run relevant tests, and update status to completed before moving to the next item.
Quick Start
- Step 1: Parse reviewer notes and identify distinct changes
- Step 2: Create todos with TodoWrite and mark the first as in_progress
- Step 3: Implement each change, verify tests, and update status upon completion
Example Use Cases
- Interpret reviewer notes and convert them into todos like 'Add type hints to extract function'
- Fix duplicate tag detection logic based on PR feedback
- Update docstrings in chain.py to reflect behavior
- Add unit test for edge case mentioned in review
- Implement a new test to cover a failing scenario from feedback