build
npx machina-cli add skill ajaywadhara/agentic-sdlc-plugin/build --openclaw
Arguments: $FEATURE
Read CLAUDE.md before doing anything else. Read the acceptance criteria for "$FEATURE" in docs/PRD.md.
━━━ STEP 1: SPEC AGENT (isolated subagent) ━━━
Spawn a subagent with ONLY this context:
- The acceptance criteria for $FEATURE from PRD.md
- CLAUDE.md (for naming conventions and patterns)
- The relevant wireframe from wireframes/ if a UI feature
Subagent instruction: "Write failing tests for $FEATURE to:
- tests/unit/$FEATURE.test.ts
- tests/integration/$FEATURE.integration.test.ts
Tests must be RED. Do not write any implementation code. Write tests that would pass if the feature worked exactly as described in the acceptance criteria — not tests that are easy to make pass."
━━━ STEP 2: CONFIRM RED ━━━
Run: npm run test -- $FEATURE
- If tests pass (they should not yet): something is wrong. Inspect and fix.
- If tests fail with import errors only: fix the imports; this is not real RED.
- If tests fail because the feature does not exist: this is correct RED. Proceed.
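The triage above can be sketched in shell. The runner output here is simulated so the snippet is self-contained; in a real run you would capture it with something like `out=$(npm run test -- "$FEATURE" 2>&1)`:

```shell
# Simulated runner output for a correct RED: the failure is a missing symbol,
# not a module-resolution error.
out="FAIL tests/unit/user-login.test.ts
  ReferenceError: validateCoupon is not defined"

# Step 2 triage: import errors are not real RED; anything else is.
if printf '%s' "$out" | grep -q "Cannot find module"; then
  echo "import errors only: fix imports, this is not real RED"
else
  echo "real RED: feature missing, proceed"
fi
```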
━━━ STEP 3: IMPLEMENT AGENT (fresh subagent, isolated context) ━━━
Spawn a new subagent with ONLY this context:
- The failing test files
- CLAUDE.md
- Relevant wireframe
Subagent instruction: "The test files contain failing tests for $FEATURE. Implement the minimum code necessary to make ALL tests pass. Read CLAUDE.md for patterns and conventions. Do not add any functionality that is not tested."
━━━ STEP 4: CONFIRM GREEN ━━━
Run: npm run test -- $FEATURE
- If GREEN: proceed to refactor.
- If RED: spawn FIX AGENT: "Tests are still failing. Error output: [paste exact output]. Fix the implementation only. Do not change test assertions. The tests are correct; the implementation is wrong." Then re-run.
- If still RED after 3 attempts: stop and report the issue.
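The bounded fix loop can be sketched in shell. Here fix_agent is a stub standing in for spawning the FIX AGENT; in this sketch it "succeeds" on the third try so the loop is demonstrable:

```shell
# Bounded retry: at most 3 fix attempts, then stop and report.
attempt=0
green=false
fix_agent() { [ "$attempt" -ge 2 ]; }  # stub: real version spawns a subagent

until $green || [ "$attempt" -ge 3 ]; do
  if fix_agent; then green=true; fi
  attempt=$((attempt + 1))
done

if $green; then
  echo "GREEN after $attempt attempt(s)"
else
  echo "still RED after 3 attempts: stop and report the issue"
fi
```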
━━━ STEP 5: REFACTOR ━━━
Spawn REFACTOR AGENT: "All tests are GREEN for $FEATURE. Refactor the implementation for clarity, performance, and consistency with CLAUDE.md patterns. Tests must remain GREEN after every change. Run tests after each significant change."
━━━ STEP 6: HAND OFF ━━━
For web applications (UI features), run BOTH:
- /test-ui $FEATURE — interactive browser testing via Playwright MCP
- /qa-run $FEATURE — full 8-agent QA loop
/test-ui explores the live app and generates initial Playwright test files. /qa-run then runs the complete QA pipeline including those generated tests.
For non-UI features (API, data layer, utilities), run only:
- /qa-run $FEATURE — skip /test-ui and go straight to the QA loop
Source
https://github.com/ajaywadhara/agentic-sdlc-plugin/blob/main/skills/build/SKILL.md
Overview
This skill runs a guided TDD build loop for a given feature ($FEATURE): write failing tests first (RED), implement the minimum code to make them pass (GREEN), then refactor, using per-feature test files and the PRD.md acceptance criteria and CLAUDE.md conventions as guidance.
How This Skill Works
The workflow uses isolated subagents with narrowly scoped context. Step 1 generates RED tests in tests/unit/$FEATURE.test.ts and tests/integration/$FEATURE.integration.test.ts from the PRD acceptance criteria and CLAUDE.md. Step 2 runs npm run test -- $FEATURE to confirm RED. Step 3 implements the minimal code to satisfy the failing tests, Step 4 re-runs the tests to verify GREEN, Step 5 refactors against CLAUDE.md patterns, and Step 6 hands off to UI testing and/or the QA loop.
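The per-feature naming convention can be captured in a small helper (the helper itself is illustrative, not part of the skill):

```typescript
// Derive the two test file paths this workflow expects for a feature.
function testPaths(feature: string): { unit: string; integration: string } {
  return {
    unit: `tests/unit/${feature}.test.ts`,
    integration: `tests/integration/${feature}.integration.test.ts`,
  };
}

console.log(testPaths("user-login"));
```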
When to Use It
- You are adding a new feature with explicit acceptance criteria in PRD.md.
- You want to enforce a RED-first TDD workflow to reduce risk before implementation.
- You need isolated feature work to avoid cross-feature interference using subagents.
- You want tests that fail because functionality is missing (real RED), not because of broken import paths.
- You are prepping for a QA pass by ensuring unit and integration tests cover the feature.
Quick Start
- Step 1: Spawn SPEC AGENT to generate failing tests for $FEATURE (tests/unit/$FEATURE.test.ts and tests/integration/$FEATURE.integration.test.ts) following PRD/CLAUDE.
- Step 2: Run npm run test -- $FEATURE to confirm RED; fix as needed.
- Step 3: Spawn IMPLEMENT AGENT to write minimal code to satisfy tests, then re-run tests and proceed to a REFACTOR pass.
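The Quick Start condenses to a few shell commands (user-login is a placeholder feature name; npm run test -- is assumed to forward the filter to the project's test runner):

```shell
FEATURE=user-login

# Step 1: the SPEC AGENT writes RED tests to these two paths.
echo "tests/unit/$FEATURE.test.ts"
echo "tests/integration/$FEATURE.integration.test.ts"

# Step 2: confirm RED       -> npm run test -- "$FEATURE"
# Step 3: implement, re-run -> npm run test -- "$FEATURE"  (expect GREEN)
```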
Best Practices
- Read CLAUDE.md and the PRD acceptance criteria before starting.
- Name tests and files exactly as tests/unit/$FEATURE.test.ts and tests/integration/$FEATURE.integration.test.ts.
- Always start with RED tests and implement the minimum code to pass.
- Keep changes scoped to the feature boundary; avoid new functionality not covered by tests.
- After GREEN, perform a structured refactor that aligns with CLAUDE.md conventions.
Example Use Cases
- Add a new user authentication feature: write RED tests for login failure, then implement login logic to satisfy tests and refactor for cleaner authentication flow.
- Implement a feature flag toggle: create RED unit/integration tests named after the feature, then code the toggle and clean up.
- Create a product search API endpoint: stub RED tests in unit/integration, implement endpoint logic to pass, then refactor.
- Refactor a data persistence helper: keep public API intact while improving internal structure once tests are GREEN.
- Build a UI feature by driving it with unit tests first, then integration tests, followed by a UI-level QA pass via /test-ui and /qa-run.