# Define Validation

```shell
npx machina-cli add skill rjroy/vibe-garden/define-validation --openclaw
```

Define how the AI validates its work before declaring done.
## When to Use
- Spec or plan exists but lacks AI Validation section
- Starting work without formal spec/plan
- Want to make validation criteria explicit for any chunk of work
- Reviewing existing criteria for completeness
## Process

1. Identify the work: read any existing spec or plan, or gather context from the conversation.
2. Start with defaults: always include the standard validation checklist.
3. Probe for custom needs: ask "Does this feature need any specific verification beyond the defaults?"
4. Output the criteria: present them for user confirmation.
5. Save or append: either update the existing spec/plan or save a standalone document.
## Output

If a spec or plan exists, offer to append the AI Validation section to it.

If no formal document exists, save to `.lore/validation/[feature-or-work].md`.
## Validation Criteria Structure

```markdown
## AI Validation

**Defaults** (apply unless overridden):

- Unit tests with mocked time/network/filesystem/LLM calls (including Agent SDK `query()`)
- 90%+ coverage on new code
- Code review by fresh-context sub-agent

**Custom**:

- [Feature-specific validation steps]
```
## Defaults Explained

These apply to virtually all work:
| Default | Why |
|---|---|
| Mock time | Tests shouldn't depend on when they run |
| Mock network | Tests shouldn't fail due to connectivity |
| Mock filesystem | Tests should be isolated and reproducible |
| Mock LLM calls | Agent SDK `query()` is an external API: it costs money and can fail |
| 90%+ coverage | New code should be exercised by tests |
| Code review | Fresh-context sub-agent catches what the implementer misses |
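The mocking defaults above can be sketched with Python's `unittest.mock`. Note that `summarize` and its injected `query_fn` are hypothetical stand-ins, not the real Agent SDK API:

```python
import time
from unittest.mock import Mock, patch

def summarize(text, query_fn):
    """Hypothetical unit under test. query_fn stands in for an external
    LLM call (e.g. Agent SDK query()) and is injected so tests can mock it."""
    started = time.time()
    answer = query_fn(f"Summarize: {text}")
    return {"summary": answer, "elapsed": time.time() - started}

# Mock the clock and the LLM call so the test is deterministic,
# offline, and free to run.
with patch("time.time", side_effect=[100.0, 100.5]):
    result = summarize("long text", query_fn=Mock(return_value="short summary"))

assert result == {"summary": "short summary", "elapsed": 0.5}
```

Injecting the LLM call as a parameter (rather than patching a hardcoded import path) keeps the test independent of module layout, which matters when fresh-context sub-agents later run the suite.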
## Custom Validation Examples

When probing for custom needs, consider:
- CLI tools: "Output matches expected format in examples/"
- Parsers: "All test fixtures parse without errors"
- Generators: "Generated files are syntactically valid"
- Integrations: "Integration test passes against staging/mock API"
- UI components: "Renders without console errors in test harness"
- Data migrations: "Round-trip preserves data integrity"
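As one illustration, the generator criterion above ("generated files are syntactically valid") can be made mechanical. This sketch assumes the generator emits Python and leans on `ast.parse`; swap in the parser for your target language:

```python
import ast
import tempfile
from pathlib import Path

def invalid_generated_files(directory):
    """Return (filename, error) pairs for generated .py files that fail to parse."""
    failures = []
    for path in sorted(Path(directory).glob("*.py")):
        try:
            ast.parse(path.read_text())
        except SyntaxError as err:
            failures.append((path.name, str(err)))
    return failures

# Demo with one valid and one invalid "generated" file.
with tempfile.TemporaryDirectory() as tmp:
    Path(tmp, "good.py").write_text("x = 1\n")
    Path(tmp, "bad.py").write_text("def broken(:\n")
    failures = invalid_generated_files(tmp)

assert [name for name, _ in failures] == ["bad.py"]
```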
## Standalone Document Structure

When no spec/plan exists:

```markdown
# Validation: [Work Description]

**For**: Brief description of what's being built

## AI Validation

**Defaults** (apply unless overridden):

- Unit tests with mocked time/network/filesystem/LLM calls (including Agent SDK `query()`)
- 90%+ coverage on new code
- Code review by fresh-context sub-agent

**Custom**:

- [Feature-specific items]

## Context

How these validation criteria were derived (conversation, informal description, etc.)
```
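A minimal scaffold for the standalone document could look like the following sketch. The slug, description, and custom items are illustrative; the `.lore/validation/` path and the default checklist come from this skill:

```python
import tempfile
from pathlib import Path

# The standard defaults from this skill's checklist.
DEFAULTS = [
    "Unit tests with mocked time/network/filesystem/LLM calls (including Agent SDK `query()`)",
    "90%+ coverage on new code",
    "Code review by fresh-context sub-agent",
]

def write_validation_doc(root, slug, description, custom, context):
    """Create .lore/validation/<slug>.md with the structure shown above."""
    doc = Path(root, ".lore", "validation", f"{slug}.md")
    doc.parent.mkdir(parents=True, exist_ok=True)
    body = [f"# Validation: {description}", "", f"**For**: {description}", ""]
    body += ["## AI Validation", "", "**Defaults** (apply unless overridden):", ""]
    body += [f"- {item}" for item in DEFAULTS]
    body += ["", "**Custom**:", ""]
    body += [f"- {item}" for item in custom]
    body += ["", "## Context", "", context, ""]
    doc.write_text("\n".join(body))
    return doc

# Demo with a hypothetical feature.
with tempfile.TemporaryDirectory() as tmp:
    path = write_validation_doc(
        tmp, "csv-export", "CSV export command",
        custom=["Output matches examples/expected.csv"],
        context="Derived from an informal chat about the export feature.",
    )
    text = path.read_text()

assert "## AI Validation" in text
```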
## Keep It Actionable

Validation criteria must be things the AI can actually do:

- "Run the test suite" - actionable
- "Verify the user experience is good" - not actionable
- "Check output matches `examples/expected.json`" - actionable
- "Ensure performance is acceptable" - not actionable (unless a threshold is defined)
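An actionable criterion is one a script could evaluate. As a hedged sketch, the `examples/expected.json` check above might be implemented like this (the file layout and `output_matches_expected` helper are illustrative):

```python
import json
import tempfile
from pathlib import Path

def output_matches_expected(actual, expected_path):
    """Compare structured output against a checked-in expectation file."""
    expected = json.loads(Path(expected_path).read_text())
    return actual == expected

# Demo: an expectation file, then a matching and a non-matching result.
with tempfile.TemporaryDirectory() as tmp:
    expected_file = Path(tmp, "expected.json")
    expected_file.write_text(json.dumps({"rows": 3, "status": "ok"}))
    good = output_matches_expected({"rows": 3, "status": "ok"}, expected_file)
    bad = output_matches_expected({"rows": 2, "status": "ok"}, expected_file)

assert (good, bad) == (True, False)
```

Because the check returns a boolean, the AI can run it, report pass/fail, and point at the exact expectation file when it fails, which is what "actionable" means in practice.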
## Source

[View on GitHub](https://github.com/rjroy/vibe-garden/blob/main/lore-development/skills/define-validation/SKILL.md)

## Overview
Define Validation helps you establish concrete AI-driven success criteria for work in progress. Use it when a spec or plan lacks validation, or when starting without formal documentation, to generate clear, actionable checks that can be appended to existing docs or saved as a standalone document.
## How This Skill Works

Identify the work from the spec/plan or conversation context. Start with the defaults (unit tests with mocks, 90%+ coverage, code review). Probe for custom needs by asking targeted questions. Output the criteria for user confirmation, then either append them to the existing document or create a standalone entry under `.lore/validation/`.
## When to Use It
- Spec or plan exists but lacks AI Validation section.
- Starting work without formal spec/plan.
- Want to make validation criteria explicit for any chunk of work.
- Reviewing existing validation criteria for completeness.
- Need feature-specific validation beyond the defaults.
## Quick Start

- Step 1: Read the current spec/plan or project context to understand the scope.
- Step 2: Apply the standard defaults and ask for any feature-specific checks.
- Step 3: Present the AI Validation criteria for confirmation, then append them to the existing doc or save them to `.lore/validation/`.
## Best Practices
- Always start with the standard validation checklist by default.
- Ask about custom verification needs for the feature.
- Make the output criteria explicit and testable.
- Prefer appending to existing docs when possible.
- Keep criteria actionable and verifiable (e.g., run the test suite).
## Example Use Cases
- CLI tools: Output matches expected format in examples/
- Parsers: All test fixtures parse without errors
- Generators: Generated files are syntactically valid
- Integrations: Integration tests pass against staging/mock API
- UI components: Renders without console errors in test harness