# Writing Plans

`npx machina-cli add skill Ibrahim-3d/conductor-orchestrator-superpowers/writing-plans --openclaw`
## Overview
Write comprehensive implementation plans assuming the engineer has zero context for our codebase and questionable taste. Document everything they need to know: which files to touch for each task, code, testing, docs they might need to check, how to test it. Give them the whole plan as bite-sized tasks. DRY. YAGNI. TDD. Frequent commits.
Assume they are a skilled developer, but know almost nothing about our toolset or problem domain. Assume they don't know good test design very well.
Announce at start: "I'm using the writing-plans skill to create the implementation plan."
Context: This should be run in a dedicated worktree (created by brainstorming skill).
Save plans to: docs/plans/YYYY-MM-DD-<feature-name>.md
## Bite-Sized Task Granularity
Each step is one action (2-5 minutes):
- "Write the failing test" - step
- "Run it to make sure it fails" - step
- "Implement the minimal code to make the test pass" - step
- "Run the tests and make sure they pass" - step
- "Commit" - step
## Plan Document Header
Every plan MUST start with this header:
```markdown
# [Feature Name] Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** [One sentence describing what this builds]

**Architecture:** [2-3 sentences about approach]

**Tech Stack:** [Key technologies/libraries]

---
```
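For instance, a filled-in header might read like this (the feature and its details are hypothetical, purely to show how the placeholders get replaced):

```markdown
# Rate Limiter Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Add per-user request rate limiting to the public API.

**Architecture:** A token-bucket limiter implemented as middleware, backed by an in-memory store. Limits are configurable per route.

**Tech Stack:** Python, FastAPI, pytest

---
```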
## Task Structure
### Task N: [Component Name]
**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`
**Step 1: Write the failing test**
```python
def test_specific_behavior():
    result = function(input)
    assert result == expected
```
**Step 2: Run test to verify it fails**
Run: `pytest tests/path/test.py::test_name -v`
Expected: FAIL with "function not defined"
**Step 3: Write minimal implementation**
```python
def function(input):
    return expected
```
**Step 4: Run test to verify it passes**
Run: `pytest tests/path/test.py::test_name -v`
Expected: PASS
**Step 5: Commit**
```bash
git add tests/path/test.py src/path/file.py
git commit -m "feat: add specific feature"
```
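As a concrete illustration, Steps 1 and 3 might be filled in like this for a hypothetical `slugify` helper (the function and test names are illustrative only, not from any real codebase):

```python
import re

# Step 1: the failing test, written before the implementation exists.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

# Step 3: the minimal implementation that makes the test pass.
def slugify(text):
    # Lowercase, collapse runs of non-alphanumerics into "-", trim edge hyphens.
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")
```

Steps 2 and 4 then run `pytest` exactly as shown in the template, first expecting a failure ("slugify not defined"), then a pass.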
## Remember
- Exact file paths always
- Complete code in plan (not "add validation")
- Exact commands with expected output
- Reference relevant skills with @ syntax
- DRY, YAGNI, TDD, frequent commits
## Conductor Integration

When invoked with `--output-dir` and `--spec` parameters (from the Conductor orchestrator):

- Save `plan.md` to the specified `--output-dir` (NOT `docs/plans/`)
- Read the spec from the `--spec` path
- Read project context from the `--context-files` paths
- Include a DAG section if `--include-dag=true`
- After saving, do NOT offer an execution choice; return control to the orchestrator
- Update the `--metadata` checkpoint to `PLAN: PASSED`
When these parameters are absent, fall back to the standalone workflow below.
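The mode selection described above can be sketched as a small dispatch. This is a hypothetical sketch only; the real skill receives these as invocation parameters, not as a Python dict:

```python
def resolve_mode(params):
    """Pick Conductor mode when orchestrator parameters are present.

    `params` is a hypothetical dict of parsed invocation parameters,
    e.g. {"output_dir": "out/", "spec": "spec.md"}.
    """
    if params.get("output_dir") and params.get("spec"):
        return "conductor"   # save to --output-dir, return control to orchestrator
    return "standalone"      # save to docs/plans/, offer the execution choice
```

The point is simply that both orchestrator parameters must be present to enter Conductor mode; anything less falls through to the standalone workflow.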
## Execution Handoff (Standalone Mode)
After saving the plan, offer execution choice:
"Plan complete and saved to docs/plans/<filename>.md. Two execution options:
1. Subagent-Driven (this session) - I dispatch fresh subagent per task, review between tasks, fast iteration
2. Parallel Session (separate) - Open new session with executing-plans, batch execution with checkpoints
Which approach?"
If Subagent-Driven chosen:
- REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development
- Stay in this session
- Fresh subagent per task + code review
If Parallel Session chosen:
- Guide them to open new session in worktree
- REQUIRED SUB-SKILL: New session uses superpowers:executing-plans
## Source

https://github.com/Ibrahim-3d/conductor-orchestrator-superpowers/blob/master/skills/writing-plans/SKILL.md
## How This Skill Works
The skill generates a Markdown plan that starts with the required header and a detailed task breakdown. Each task lists exact file paths, test steps, and commit steps, keeping the plan aligned with DRY and TDD. In Conductor mode it saves the plan to the path given by `--output-dir`, reads the spec from `--spec`, and returns control to the orchestrator after saving; otherwise it follows the standalone workflow.
## When to Use It
- When you have a spec or requirements for a multi-step task and want a full implementation plan before coding.
- When the engineer is new to the codebase or problem domain and needs explicit guidance on files, tests, and docs.
- When you need to ensure tasks are broken into 2-5 minute bite-sized steps with clear success criteria.
- When you want to enforce DRY, YAGNI, and TDD principles and maintain frequent commits.
- When planning for Conductor execution or standalone handoff after plan generation.
## Quick Start
- Step 1: Outline the plan header with Goal, Architecture, and Tech Stack; include the mandatory announcement line.
- Step 2: Break the feature into Task N sections, listing exact files to Create/Modify and corresponding tests.
- Step 3: Save the plan to docs/plans/YYYY-MM-DD-<feature-name>.md; if using Conductor, invoke with --output-dir and --spec and proceed without prompting for execution.
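One way to derive the dated plan path from Step 3 is a small shell one-liner (a sketch; the feature name is a placeholder):

```shell
#!/bin/sh
# Build the docs/plans/YYYY-MM-DD-<feature-name>.md path for today's date.
feature="user-auth"   # placeholder feature name
plan_path="docs/plans/$(date +%Y-%m-%d)-${feature}.md"
echo "$plan_path"
```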
## Best Practices
- Always start with the plan header and include Goal, Architecture, and Tech Stack sections.
- Reference exact file paths and keep the plan self-contained with complete code, tests, and docs guidance.
- Break work into 2-5 minute steps and include explicit Run/Commit steps in each task.
- Adhere to DRY, YAGNI, and TDD; commit frequently with clear messages.
- Save the plan to docs/plans/YYYY-MM-DD-<feature-name>.md and handle Conductor integration if requested.
## Example Use Cases
- Add a multi-step onboarding flow with tests and documentation
- Implement a feature-flag gated REST endpoint with integration tests
- Refactor a legacy module and cover it with regression tests
- Introduce a new API endpoint with end-to-end tests and docs
- Plan a data migration with schema changes and verification tests