writing-plans
npx machina-cli add skill parthalon025/autonomous-coding-toolkit/writing-plans --openclaw
Writing Plans
Overview
Write comprehensive implementation plans assuming the engineer has zero context for our codebase and questionable taste. Document everything they need to know: which files to touch for each task, the code itself, which docs to check, and how to test it. Give them the whole plan as bite-sized tasks. DRY. YAGNI. TDD. Frequent commits.
Assume they are a skilled developer who knows almost nothing about our toolset or problem domain, and don't assume they know good test design.
Announce at start: "I'm using the writing-plans skill to create the implementation plan."
Context: This should be run in a dedicated worktree (created by brainstorming skill).
Save plans to: docs/plans/YYYY-MM-DD-<feature-name>.md
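The date-stamped filename can be derived mechanically. A minimal sketch (the feature name here is illustrative):

```python
from datetime import date
from pathlib import Path

def plan_path(feature_name: str) -> Path:
    # docs/plans/YYYY-MM-DD-<feature-name>.md, stamped with today's date
    return Path("docs/plans") / f"{date.today():%Y-%m-%d}-{feature_name}.md"

print(plan_path("user-auth"))
```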
Bite-Sized Task Granularity
Each step is one action (2-5 minutes):
- "Write the failing test" - step
- "Run it to make sure it fails" - step
- "Implement the minimal code to make the test pass" - step
- "Run the tests and make sure they pass" - step
- "Commit" - step
Plan Document Header
Every plan MUST start with this header:
# [Feature Name] Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** [One sentence describing what this builds]
**Architecture:** [2-3 sentences about approach]
**Tech Stack:** [Key technologies/libraries]
---
Task Structure
### Task N: [Component Name]
**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`
**Step 1: Write the failing test**
```python
def test_specific_behavior():
    result = function(input)
    assert result == expected
```
**Step 2: Run test to verify it fails**
Run: `pytest tests/path/test.py::test_name -v`
Expected: FAIL with `NameError: name 'function' is not defined`
**Step 3: Write minimal implementation**
```python
def function(input):
    return expected
```
**Step 4: Run test to verify it passes**
Run: `pytest tests/path/test.py::test_name -v`
Expected: PASS
**Step 5: Commit**
```bash
git add tests/path/test.py src/path/file.py
git commit -m "feat: add specific feature"
```
Remember
- Exact file paths always
- Complete code in plan (not "add validation")
- Exact commands with expected output
- Reference relevant skills with @ syntax
- DRY, YAGNI, TDD, frequent commits
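To illustrate "complete code in plan": a plan step should embed the full implementation, not an instruction like "add validation". A hypothetical example of what the plan itself would contain (the function and its rules are illustrative):

```python
def validate_username(name: str) -> str:
    """Complete validation as it would appear verbatim in the plan.
    (Hypothetical example; the specific rules are illustrative.)"""
    cleaned = name.strip().lower()
    if not cleaned:
        raise ValueError("username must not be empty")
    if len(cleaned) > 32:
        raise ValueError("username must be at most 32 characters")
    return cleaned
```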
Execution Handoff
After saving the plan, offer execution choice:
"Plan complete and saved to docs/plans/<filename>.md. Three execution options:
1. Subagent-Driven (this session) - I dispatch fresh subagent per task with two-stage review, fast iteration, you watch progress
2. Parallel Session (separate) - Open new session with executing-plans, batch execution with human review checkpoints
3. Headless (walk away) - Run `scripts/run-plan.sh` in the background. Fresh `claude -p` per batch, quality gates between batches, resume on interruption. Best for 5+ batch plans.
Which approach?"
If Subagent-Driven chosen:
- REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development
- Stay in this session
- Fresh subagent per task + code review
If Parallel Session chosen:
- Guide them to open new session in worktree
- REQUIRED SUB-SKILL: New session uses superpowers:executing-plans
If Headless chosen:
- Generate the run command with appropriate flags:
  `scripts/run-plan.sh docs/plans/<plan-file>.md --quality-gate "scripts/quality-gate.sh --project-root ."`
- If the plan has critical batches, suggest `--mode competitive --competitive-batches N,M`
- For long plans (10+ batches), suggest `--on-failure retry --max-retries 3`
- Remind them: `--resume` picks up where it left off after interruption
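The flag combinations above can be assembled programmatically. A hypothetical helper that composes the invocation using only the flags documented here (the helper itself is not part of the toolkit):

```python
def run_plan_command(plan_file, competitive_batches=None, retries=None, resume=False):
    # Build the scripts/run-plan.sh invocation from the documented flags.
    cmd = ["scripts/run-plan.sh", plan_file,
           "--quality-gate", "scripts/quality-gate.sh --project-root ."]
    if competitive_batches:  # critical batches: run them competitively
        cmd += ["--mode", "competitive",
                "--competitive-batches", ",".join(str(b) for b in competitive_batches)]
    if retries:  # long plans: retry failed batches
        cmd += ["--on-failure", "retry", "--max-retries", str(retries)]
    if resume:  # pick up where a previous run left off
        cmd.append("--resume")
    return cmd

print(" ".join(run_plan_command("docs/plans/2025-01-01-auth.md", retries=3)))
```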
Source
https://github.com/parthalon025/autonomous-coding-toolkit/blob/main/skills/writing-plans/SKILL.md
Overview
Writing Plans creates comprehensive implementation plans when you have a spec for a multi-step task. It assumes engineers start with zero context, documents every file to touch along with the code, tests, and docs, and breaks work into bite-sized steps with frequent commits.
How This Skill Works
The skill outputs a Markdown plan with a standard header and Task N sections, each listing required Files to Create/Modify/Test, plus five steps (write the failing test, run it, implement minimal code, run tests, commit) and exact commands. Plans are saved to docs/plans/YYYY-MM-DD-<feature-name>.md for reproducibility.
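A minimal sketch of rendering the standard header that structure implies (all field values below are placeholders):

```python
def plan_header(feature: str, goal: str, architecture: str, stack: str) -> str:
    # Render the standard plan header described above; wording is illustrative.
    return (
        f"# {feature} Implementation Plan\n\n"
        "> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans "
        "to implement this plan task-by-task.\n\n"
        f"**Goal:** {goal}\n\n"
        f"**Architecture:** {architecture}\n\n"
        f"**Tech Stack:** {stack}\n\n"
        "---\n"
    )

print(plan_header("User Auth", "Add session-based login.",
                  "Flask blueprint with server-side sessions.", "Flask, pytest"))
```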
When to Use It
- When you have a spec or requirements for a multi-step task and no code yet
- When you want a test-driven, bite-sized plan before touching code
- When you need exact file paths and commands documented for a feature
- When working in a dedicated worktree and need reproducible steps
- When you want plans saved to docs/plans/YYYY-MM-DD-<feature-name>.md
Quick Start
- Step 1: Gather the spec/requirements and decide the feature name
- Step 2: Create docs/plans/YYYY-MM-DD-<feature-name>.md and add the standard plan header
- Step 3: Break work into Task blocks with 5 steps each and save to the plan file
Best Practices
- DRY and YAGNI: keep steps small, avoid unnecessary work
- Follow TDD: write tests first and let failing tests drive the implementation
- Use exact file paths and commands for reproducibility
- Reference related skills with @ syntax to link context
- Commit frequently with meaningful messages
Example Use Cases
- Plan and implement a new REST endpoint from a given spec, including tests, docs, and integration steps
- Add a CLI command by outlining tasks, tests, and required files before coding
- Refactor a module with a plan-first approach to minimize risk and maximize test coverage
- Integrate a third-party API by detailing contracts, mocks, and end-to-end tests
- Create a feature scaffold (docs, tests, code) for a greenfield component based on requirements