
shipyard-writing-plans

npx machina-cli add skill lgbarn/shipyard/shipyard-writing-plans --openclaw
<!-- TOKEN BUDGET: 220 lines / ~660 tokens -->

Writing Plans

<activation>

When This Skill Activates

  • You have a spec, requirements, or design for a multi-step implementation task
  • You need to break work into bite-sized, executable tasks before touching code
  • You are preparing work for builder agents or a parallel execution session

Announce at start: "I'm using the writing-plans skill to create the implementation plan."

Context: This should be run in a dedicated worktree (created by brainstorming skill).

Save plans to: docs/plans/YYYY-MM-DD-<feature-name>.md
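The naming convention above can be applied mechanically. A minimal sketch, assuming a hypothetical feature named "user-auth":

```shell
# Create the date-stamped plan file per the convention above.
# "user-auth" is a hypothetical feature name for illustration.
mkdir -p docs/plans
touch "docs/plans/$(date +%Y-%m-%d)-user-auth.md"
ls docs/plans
```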

Natural Language Triggers

  • "write a plan", "create a plan", "plan this feature", "break this down into tasks"
</activation>

<instructions>

Overview

Write comprehensive implementation plans assuming the engineer has zero context for our codebase and questionable taste. Document everything they need to know: which files to touch for each task, the code to write, the tests to run, docs they might need to check, and how to verify the result. Give them the whole plan as bite-sized tasks. DRY. YAGNI. TDD. Frequent commits.

Assume they are a skilled developer, but know almost nothing about our toolset or problem domain. Assume they don't know good test design very well.

Shipyard Plan Format

Shipyard plans use XML-structured tasks with verification criteria. Each task includes:

<task id="1" name="Component Name">
  <description>What this task accomplishes</description>
  <files>
    <create>exact/path/to/file.py</create>
    <modify>exact/path/to/existing.py:123-145</modify>
    <test>tests/exact/path/to/test.py</test>
  </files>
  <steps>
    <step>Write the failing test</step>
    <step>Run test to verify it fails</step>
    <step>Write minimal implementation</step>
    <step>Run test to verify it passes</step>
    <step>Commit</step>
  </steps>
  <verification>
    <command>pytest tests/path/test.py::test_name -v</command>
    <expected>PASS</expected>
  </verification>
</task>

This structured format enables /shipyard:build to parse and execute tasks systematically, and /shipyard:status to track progress.
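Because the task blocks are plain XML, an executor can extract them with standard tooling. A minimal sketch of how a command like /shipyard:build might parse a task (the parsing approach here is an assumption, not Shipyard's actual implementation):

```python
import xml.etree.ElementTree as ET

# A task block in the Shipyard plan format (same shape as above).
plan_xml = """
<task id="1" name="Component Name">
  <description>What this task accomplishes</description>
  <files>
    <create>exact/path/to/file.py</create>
    <test>tests/exact/path/to/test.py</test>
  </files>
  <steps>
    <step>Write the failing test</step>
    <step>Commit</step>
  </steps>
  <verification>
    <command>pytest tests/path/test.py::test_name -v</command>
    <expected>PASS</expected>
  </verification>
</task>
"""

task = ET.fromstring(plan_xml)
print(task.get("id"), task.get("name"))        # task metadata
print([s.text for s in task.find("steps")])    # ordered list of steps
print(task.findtext("verification/command"))   # command the executor runs
```

An executor would loop over every `<task>` in the plan, run each `<verification>/<command>`, and compare the result against `<expected>` to decide whether the task is done.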

Bite-Sized Task Granularity

Each step is one action (2-5 minutes):

  • "Write the failing test" - step
  • "Run it to make sure it fails" - step
  • "Implement the minimal code to make the test pass" - step
  • "Run the tests and make sure they pass" - step
  • "Commit" - step

Plan Document Header

Every plan MUST start with this header:

# [Feature Name] Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use shipyard:shipyard-executing-plans to implement this plan task-by-task.

**Goal:** [One sentence describing what this builds]

**Architecture:** [2-3 sentences about approach]

**Tech Stack:** [Key technologies/libraries]

---

Task Structure

### Task N: [Component Name]

**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`

**Step 1: Write the failing test**

```python
def test_specific_behavior():
    result = function(input)
    assert result == expected
```

**Step 2: Run test to verify it fails**

Run: `pytest tests/path/test.py::test_name -v`
Expected: FAIL with "function not defined"

**Step 3: Write minimal implementation**

```python
def function(input):
    return expected
```

**Step 4: Run test to verify it passes**

Run: `pytest tests/path/test.py::test_name -v`
Expected: PASS

**Step 5: Commit**

```bash
git add tests/path/test.py src/path/file.py
git commit -m "feat: add specific feature"
```

</instructions>

<examples>

## Example: Well-Written vs Poorly-Written Plan Task

<example type="good" title="Clear, executable task with exact paths and code">
### Task 2: Add Email Validation

**Files:**
- Create: `src/validators/email.py`
- Test: `tests/validators/test_email.py`

**Step 1: Write the failing test**
```python
def test_rejects_empty_email():
    with pytest.raises(ValidationError, match="email is required"):
        validate_email("")
```

**Step 2: Run test to verify it fails**

Run: `pytest tests/validators/test_email.py::test_rejects_empty_email -v`
Expected: FAIL with "cannot import name 'validate_email'"

**Step 3: Write minimal implementation**

```python
from src.errors import ValidationError

def validate_email(email: str) -> str:
    if not email or not email.strip():
        raise ValidationError("email is required")
    return email.strip()
```

**Step 4: Run test to verify it passes**

Run: `pytest tests/validators/test_email.py::test_rejects_empty_email -v`
Expected: PASS

**Step 5: Commit**

```bash
git add src/validators/email.py tests/validators/test_email.py
git commit -m "feat: add email validation with empty check"
```
</example>

<example type="bad" title="Vague task that leaves builder guessing">
### Task 2: Add Validation

Add email validation to the project. Make sure it handles edge cases. Write tests.
</example>

The good example provides exact file paths, complete code, exact commands with expected output, and a single commit per TDD cycle. The bad example forces the builder to guess paths, invent code, and figure out what "edge cases" means.

</examples>

<rules>

Remember

  • Exact file paths always
  • Complete code in plan (not "add validation")
  • Exact commands with expected output
  • Reference relevant skills with @ syntax
  • DRY, YAGNI, TDD, frequent commits
</rules>

Execution Handoff

After saving the plan, offer execution choice:

"Plan complete and saved to docs/plans/<filename>.md. Two execution options:

1. Agent-Driven (this session) - I dispatch fresh builder agent per task, review between tasks, fast iteration

2. Parallel Session (separate) - Open new session with executing-plans, batch execution with checkpoints

Which approach?"

If Agent-Driven chosen:

  • REQUIRED SUB-SKILL: Use shipyard:shipyard-executing-plans
  • Stay in this session
  • Fresh builder agent per task + two-stage review (spec compliance then code quality)

If Parallel Session chosen:

  • Guide them to open new session in worktree
  • REQUIRED SUB-SKILL: New session uses shipyard:shipyard-executing-plans

Source

git clone https://github.com/lgbarn/shipyard

The skill file lives at skills/shipyard-writing-plans/SKILL.md in the repository (https://github.com/lgbarn/shipyard/blob/main/skills/shipyard-writing-plans/SKILL.md).

Overview

Shipyard-writing-plans converts a spec, requirements, or design into a complete implementation plan. It outputs comprehensive, bite-sized tasks with file touches, tests, steps, and verification in Shipyard's XML task format, saved under docs/plans for a dedicated worktree. This helps engineers with zero context understand exactly what to build and how to verify it.

How This Skill Works

Trigger it with phrases like "write a plan" or "plan this feature" to generate a Shipyard-formatted plan. The skill assumes an engineer with no prior context and produces tasks detailing exact file operations, tests, step-by-step actions, and verification criteria, plus guidance for commits. Plans are saved to docs/plans/YYYY-MM-DD-<feature-name>.md and are designed to drive parallel execution and handoffs to builder agents.

When to Use It

  • You have a spec, requirements, or design for a multi-step implementation task.
  • You need to break work into bite-sized, executable tasks before touching code.
  • You are preparing work for builder agents or a parallel execution session.
  • You want a machine-parseable plan that can be fed to shipyard executors like /shipyard:build and /shipyard:status.
  • You want the plan documented in a Markdown file saved at docs/plans with a date stamp and feature name.

Quick Start

  1. Announce at start: "I'm using the writing-plans skill to create the implementation plan."
  2. Describe the feature or say "write a plan" to trigger plan generation; include the feature name and key constraints.
  3. Review the generated plan, then save it to docs/plans/YYYY-MM-DD-<feature-name>.md and proceed with execution or handoff.

Best Practices

  • Start from a clear spec and assume the engineer has zero context about the codebase.
  • Use the Shipyard Plan Format with explicit create/modify/test file paths, steps, and verification.
  • Keep each task small (2-5 minutes) to maximize parallel progress and feedback loops.
  • Follow DRY, YAGNI, and TDD principles; write minimal enough implementation to satisfy tests.
  • Commit frequently with meaningful messages and document test coverage and acceptance criteria.

Example Use Cases

  • Example: Add a new REST endpoint by creating src/api/new_endpoint.py, updating routing in app.py, and adding tests in tests/test_new_endpoint.py; the plan details the exact files, tests, and verification commands.
  • Example: Split a flaky integration test into isolated unit tests with a dedicated test suite and environment setup steps, including test-first failures and expected pass criteria.
  • Example: Migrate configuration from JSON to YAML, listing file changes, migration tests, and rollback steps for safety, all documented in XML task blocks.
  • Example: Refactor the authentication service across modules, with a map of touched files, updated tests, and a verification suite to ensure behavior remains consistent.
  • Example: Plan a feature rollout that deprecates an old API, includes new API surface, test coverage, docs updates, and a rollout verification plan.
