quality-checklist
npx machina-cli add skill a5c-ai/babysitter/quality-checklist --openclaw
Quality Checklist
Overview
Post-implementation quality gate that validates the completed work against constitution standards, specification requirements, and custom quality checks. Produces a scored assessment with remediation recommendations for any failures.
When to Use
- After implementation is complete, before declaring done
- When validating code quality against constitution standards
- When verifying specification requirement coverage
- When running custom project-specific quality checks
Key Principle
Quality validation must be objective, reproducible, and multi-dimensional. Failed items must have actionable remediation recommendations. The checklist supports convergence loops -- re-validate after fixes until quality threshold is met.
Process
- Validate code quality - Check against constitution coding standards
- Verify test coverage - Ensure coverage meets constitution thresholds
- Check spec satisfaction - Verify all requirements are implemented
- Assess performance - Validate against constitution benchmarks
- Verify security - Check compliance with constitution security constraints
- Execute custom checks - Run any project-specific quality checks
- Score overall quality - Weighted average across categories (0-100)
- Produce recommendations - Actionable fixes for failed items
- Remediation loop - Re-validate after fixes (up to 3 iterations)
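The scoring step above can be sketched in Python. The category names, weights, and sample scores here are illustrative assumptions, not the skill's actual configuration; the skill's real weighting is defined by its process, so treat this as a minimal model of "weighted average across categories (0-100)".

```python
# Hypothetical category weights -- illustrative only, not the skill's
# real configuration.
CATEGORY_WEIGHTS = {
    "code_quality": 0.25,
    "test_coverage": 0.20,
    "spec_satisfaction": 0.25,
    "performance": 0.15,
    "security": 0.15,
}

def overall_score(category_scores: dict[str, float]) -> float:
    """Weighted average of per-category scores (each 0-100)."""
    total = sum(
        CATEGORY_WEIGHTS[name] * score
        for name, score in category_scores.items()
    )
    # Normalize by the weights actually present, so a partial run
    # still yields a 0-100 score.
    weight = sum(CATEGORY_WEIGHTS[name] for name in category_scores)
    return total / weight if weight else 0.0

scores = {"code_quality": 90, "test_coverage": 80,
          "spec_satisfaction": 100, "performance": 85, "security": 95}
print(round(overall_score(scores), 1))  # → 90.5
```

A single weak category (say, security) drags the overall score down in proportion to its weight, which is what makes the gate multi-dimensional rather than pass/fail on any one check.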
Tool Use
Invoke via babysitter process: methodologies/spec-kit/spec-kit-implementation (quality checklist phase)
Full pipeline: methodologies/spec-kit/spec-kit-orchestrator
Source
https://github.com/a5c-ai/babysitter/blob/main/plugins/babysitter/skills/babysit/process/methodologies/spec-kit/skills/quality-checklist/SKILL.md
Overview
The Quality Checklist is a post-implementation gate that validates the completed work against constitution standards, specification requirements, and custom quality checks. It produces a scored assessment with remediation recommendations for any failures.
How This Skill Works
It runs multi-step validation across code quality (constitution standards), test-coverage thresholds, spec satisfaction, performance, security, and any project-specific checks. It then computes a weighted score (0-100) and emits actionable remediation for each failed item, enabling a convergence loop: re-validate after fixes, up to three iterations.
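The convergence loop described above can be sketched as follows. The `run_checks` and `apply_remediation` callables are hypothetical placeholders standing in for the skill's validation and fix phases; the threshold value is also an assumption, chosen only to make the loop concrete.

```python
# Illustrative sketch of the convergence loop: validate, remediate
# failed items, and re-validate up to three iterations.
def run_quality_gate(run_checks, apply_remediation,
                     threshold=85, max_iterations=3):
    """Return (passed, score) after at most `max_iterations` validations."""
    for iteration in range(1, max_iterations + 1):
        score, failed_items = run_checks()
        if score >= threshold and not failed_items:
            return True, score
        if iteration < max_iterations:
            # Apply actionable remediation, then loop to re-validate.
            apply_remediation(failed_items)
    return False, score

# Toy usage: each remediation pass fixes one outstanding failure.
failures = ["coverage below threshold", "missing spec requirement"]
passed, score = run_quality_gate(
    run_checks=lambda: (100 - 10 * len(failures), list(failures)),
    apply_remediation=lambda items: failures.pop(),
)
```

The hard cap on iterations keeps the loop from cycling forever on a stubborn failure: after three validations the gate reports a failing score rather than retrying indefinitely.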
When to Use It
- After implementation is complete, before declaring done.
- When validating code quality against constitution standards.
- When verifying specification requirement coverage.
- When running custom project-specific quality checks.
- During QA and release readiness to gate changes.
Quick Start
- Step 1: Trigger the quality-checklist phase via babysitter: methodologies/spec-kit/spec-kit-implementation.
- Step 2: Run through code quality, test coverage, spec satisfaction, performance, security, and custom checks.
- Step 3: Review the 0-100 score, apply remediation for failed items, and re-run validations (up to 3 iterations).
Best Practices
- Define baseline constitution standards and spec requirements before starting.
- Automate repeated checks and ensure deterministic results.
- Prioritize remediation items by impact and effort with clear owners.
- Use the convergence loop: re-run validations up to 3 iterations after fixes.
- Document remediation steps and verify that fixes address root causes.
Example Use Cases
- Code passes quality checklist with a 92 score and no remediation required.
- A security check flags a credential leak; remediation includes secret rotation and updated tests.
- Performance criteria fail; caching and query optimization raise the score to a pass.
- Custom checks detect unused dependencies; they are removed and re-validated.
- All criteria met after iterative fixes; product ready for release.