vibe-iteration-review
npx machina-cli add skill ash1794/vibe-engineering/iteration-review --openclaw
What gets measured gets improved. Review every iteration.
When to Use This Skill
- End of a development sprint or iteration
- After completing a milestone
- Before starting the next iteration (retrospective)
- When the user asks "how did we do?"
When NOT to Use This Skill
- Mid-iteration (too early, incomplete data)
- After single small tasks
- When there's no defined iteration boundary
Steps
1. Gather data:
- What was planned for this iteration?
- What was actually delivered?
- How many commits? Lines changed? Tests added?
- Coverage delta? Quality metrics?
- Known issues introduced?
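As a sketch, the git-derived parts of the data-gathering step can be scripted. This is illustrative, not part of the skill itself: the function names are made up, and it assumes the iteration boundary is expressible as a git revision range (e.g. a tag from the previous iteration).

```python
import re
import subprocess

def parse_shortstat(line: str) -> dict:
    """Parse a `git diff --shortstat` summary, e.g.
    ' 12 files changed, 340 insertions(+), 85 deletions(-)'."""
    nums = {"files": 0, "insertions": 0, "deletions": 0}
    for count, key in re.findall(r"(\d+) (file|insertion|deletion)", line):
        nums[key + "s"] = int(count)
    return nums

def iteration_stats(base: str, head: str = "HEAD") -> dict:
    """Collect commit count and churn for the range base..head."""
    def git(*args: str) -> str:
        return subprocess.check_output(["git", *args], text=True)

    commits = int(git("rev-list", "--count", f"{base}..{head}").strip())
    churn = parse_shortstat(git("diff", "--shortstat", f"{base}..{head}"))
    return {"commits": commits, **churn}
```

Planned-vs-delivered counts, coverage deltas, and known issues still come from the tracker and CI, not git, so they are gathered manually or via your tooling's own APIs.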
2. Grade quality (A-F):
- A: All planned work delivered, tests pass, no known issues, clean code
- B: Most work delivered, minor issues, good test coverage
- C: Core work delivered, some gaps, acceptable quality
- D: Significant gaps, quality issues, needs rework
- F: Iteration failed, major rework needed
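The rubric above is judgment-based; a first-pass heuristic can still be mechanized. A minimal sketch, with the caveat that the percentage cutoffs (100/80/60/40) are illustrative assumptions, not part of the skill's definition:

```python
def grade_iteration(delivered: int, planned: int,
                    known_issues: int, tests_pass: bool) -> str:
    """Map delivery ratio, test status, and known issues to an A-F grade.
    Cutoffs are a heuristic starting point; override with judgment."""
    if planned <= 0:
        raise ValueError("planned must be > 0")
    ratio = delivered / planned
    if ratio >= 1.0 and tests_pass and known_issues == 0:
        return "A"  # all planned work delivered, tests pass, no known issues
    if ratio >= 0.8 and tests_pass and known_issues <= 2:
        return "B"  # most work delivered, minor issues
    if ratio >= 0.6:
        return "C"  # core work delivered, some gaps
    if ratio >= 0.4:
        return "D"  # significant gaps, needs rework
    return "F"      # iteration failed
```

The machine grade is a floor for discussion, not the final word; "clean code" in the A criterion still requires a human read.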
3. Compare to previous iterations (if available):
- Is quality trending up or down?
- Is velocity improving?
- Are known issues accumulating?
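Trend direction per metric is simple to compute once both iterations' numbers are in hand. A hedged sketch (function names are illustrative):

```python
def trend(current: float, previous: float) -> str:
    """Arrow for the Trend column: ↑ increased, ↓ decreased, → flat."""
    return "↑" if current > previous else "↓" if current < previous else "→"

def compare_iterations(current: dict, previous: dict) -> dict:
    """Per-metric trend directions; assumes both dicts share metric keys."""
    return {name: trend(value, previous[name]) for name, value in current.items()}
```

Note that direction is not the same as health: known issues trending ↑ is bad, tests added trending ↑ is good, so interpret arrows per metric.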
4. Extract lessons:
- What went well?
- What went poorly?
- What to change next iteration?
Output Format
Iteration Review: [Iteration Name/Number]
Quality Grade: A/B/C/D/F
Planned vs Delivered: X/Y (Z%)
| Metric | This Iteration | Previous | Trend |
|---|---|---|---|
| Commits | X | Y | ↑/↓/→ |
| Tests added | X | Y | ↑/↓/→ |
| Coverage | X% | Y% | ↑/↓/→ |
| Known issues | X | Y | ↑/↓/→ |
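If you generate the review programmatically, the metrics table above can be rendered from two dicts. A sketch under the assumption that both dicts use the same metric names as keys (the helper name is made up):

```python
def metrics_table(current: dict, previous: dict) -> str:
    """Render the markdown metrics-comparison table from the Output Format.
    Values may be numbers or preformatted strings like '85%'."""
    def arrow(cur, prev):
        return "↑" if cur > prev else "↓" if cur < prev else "→"

    lines = ["| Metric | This Iteration | Previous | Trend |",
             "|---|---|---|---|"]
    for metric, cur in current.items():
        prev = previous[metric]
        lines.append(f"| {metric} | {cur} | {prev} | {arrow(cur, prev)} |")
    return "\n".join(lines)
```

For percentage metrics like coverage, pass raw numbers for comparison and format the "%" suffix when displaying, so the arrow logic stays numeric.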
Delivered
- [Task 1]
- [Task 2]
- [Task 3] — deferred because [reason]
Lessons Learned
- [What to continue]
- [What to change]
- [What to stop]
Source
git clone https://github.com/ash1794/vibe-engineering
Skill file: skills/iteration-review/SKILL.md
Overview
vibe-iteration-review is a structured end-of-sprint evaluation that captures quality metrics, grades overall quality, and analyzes trends. It helps teams close iterations with clear insights and actionable lessons for the next cycle.
How This Skill Works
The skill collects data on planned versus delivered work, commits, lines changed, tests added, and coverage deltas, plus known issues. It then assigns a Quality Grade (A–F), compares results to previous iterations to identify trends, and extracts concrete lessons (what went well, what didn’t, what to change). The output is formatted to mirror the Iteration Review structure for easy sharing with stakeholders.
When to Use It
- End of a development sprint or iteration
- After completing a milestone
- Before starting the next iteration (retrospective)
- When the user asks "how did we do?"
- During release readiness reviews to summarize quality trends
Quick Start
- Step 1: Gather data: planned vs delivered, commits, lines changed, tests added, coverage delta, and known issues
- Step 2: Grade quality with A–F and compare to previous iterations to identify trends
- Step 3: Fill out the Output Format: Iteration Review header, metrics table, Delivered tasks, and Lessons Learned
Best Practices
- Gather data from all relevant sources: planned vs delivered, commits, lines changed, tests added, coverage delta, and known issues
- Apply a clear A–F quality grading rubric and justify each grade with evidence
- Compare current results to previous iterations to assess trend direction (quality and velocity)
- Document actionable lessons: what went well, what didn’t, and what to change next iteration
- Present results in the Iteration Review format to ensure consistency across teams
Example Use Cases
- Sprint end: a reviewer grades quality, notes coverage improvements, and lists rework items for the next sprint
- Milestone wrap-up: trends in test coverage and known issues are surfaced to stakeholders
- Retrospective prep: the team compiles What went well/What to change for the next cycle
- Stakeholder query: an executive asks how the sprint performed, and the report provides a concise answer with metrics
- Cross-team review: velocity and quality trends are aligned across multiple teams to synchronize roadmaps