launchpad
npx machina-cli add skill Geono/claude-launchpad/launchpad --openclaw
End-to-end orchestrator that chains launchpad-spec → launchpad-plan → launchpad-run into a single guided workflow. Ensures every phase completes before the next begins, and asks the user the right questions at every step.
Commands
/launchpad <feature description> # Start full pipeline from scratch
/launchpad status # Show current pipeline stage and progress
/launchpad resume # Resume from where you left off
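The three entry points above amount to a small dispatch on the command's argument. A minimal sketch of that dispatch, assuming a plain-text argument string (the `parse_launchpad` helper and its return shape are illustrative, not part of the skill):

```python
def parse_launchpad(args: str):
    """Map a /launchpad invocation to a pipeline action.

    Returns (action, payload): 'status' and 'resume' are subcommands;
    any other non-empty text is treated as a feature description that
    starts the full pipeline from Stage 1.
    """
    text = args.strip()
    if text == "status":
        return ("status", None)
    if text == "resume":
        return ("resume", None)
    if not text:
        # Key rule: fill gaps by asking, not guessing.
        return ("ask", "What feature would you like to build?")
    return ("start", text)
```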
Pipeline Stages
STAGE 1: SPEC           STAGE 2: PLAN           STAGE 3: RUN
┌──────────────┐        ┌──────────────┐        ┌──────────────┐
│ /lp:spec     │        │ Dependency   │        │ /lp:run      │
│     ↓        │        │ analysis     │        │     ↓        │
│ /lp:refine   │───────→│     ↓        │───────→│ Dispatch     │
│     ↓        │        │ Wave assign  │        │ sub-agents   │
│ /lp:clarify  │        │     ↓        │        │     ↓        │
│     ↓        │        │ harness-     │        │ Validate &   │
│ /lp:tasks    │        │ tasks.json   │        │ merge        │
└──────────────┘        └──────────────┘        └──────────────┘
  User answers            Automatic               Autonomous
  questions here          (with review)           (with recovery)
Stage 1: Spec (uses launchpad-spec skill)
Entry
When the user triggers /launchpad <description>:
- Announce stage: "Stage 1/3: Spec — Defining requirements clearly."
- Invoke /lp:spec with the user's description
- Track state: Write .launchpad.json to the project root (see State File below)
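Writing that initial state could look like the following sketch. The file layout follows the State File section later in this document; the slug-derivation convention is an assumption for illustration:

```python
import json
import re
import time

def init_state(feature_description: str, path: str = ".launchpad.json") -> dict:
    """Create the initial pipeline state at the project root."""
    # Derive a file-safe slug from the description (illustrative convention).
    slug = re.sub(r"[^a-z0-9]+", "-", feature_description.lower()).strip("-")
    state = {
        "feature": slug,
        "stage": "spec",
        "spec_file": f"specs/{slug}.md",
        "task_file": f"specs/{slug}.tasks.md",
        "harness_file": "harness-tasks.json",
        "stage_history": [
            {"stage": "spec", "status": "in_progress",
             "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())},
        ],
    }
    with open(path, "w") as f:
        json.dump(state, f, indent=2)
    return state
```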
Iteration Loop
After /lp:spec creates the spec with questions:
- Present questions to the user
- Wait for answers → invoke /lp:clarify with the user's response
- If more questions arise → repeat
- If the spec needs research → suggest and invoke /lp:refine
- Gate check: Only proceed when the "Open Questions" section is empty
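The gate check can be mechanical: scan the spec for its "Open Questions" section and block until no bullets remain. A sketch, assuming the spec is a markdown file with an "Open Questions" heading and dash bullets (the exact heading level and bullet style are assumptions):

```python
import re

def open_questions(spec_text: str) -> list[str]:
    """Return unanswered items under the 'Open Questions' heading."""
    # Grab everything from the heading to the next heading (or end of file).
    m = re.search(r"^#+\s*Open Questions\s*$(.*?)(?=^#|\Z)",
                  spec_text, re.MULTILINE | re.DOTALL)
    if not m:
        return []
    # Treat each bullet in the section as one outstanding question.
    return re.findall(r"^\s*[-*]\s+(.+)$", m.group(1), re.MULTILINE)

def gate_1_passes(spec_text: str) -> bool:
    """Gate 1 -> 2 passes only when no open questions remain."""
    return not open_questions(spec_text)
```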
Gate 1 → 2
When all questions are resolved:
- Invoke /lp:tasks to generate the task breakdown
- Present the task list to the user
- Ask: "Please review the task list. Let me know if anything is missing or needs changes. If it looks good, we'll proceed to the next stage."
- If user requests changes → edit task file → re-present
- If user approves → proceed to Stage 2
Stage 2: Plan (uses launchpad-plan skill)
Entry
- Announce stage: "Stage 2/3: Plan — Analyzing task dependencies and generating a parallel execution plan."
- Invoke /lp:plan on the task file from Stage 1
User Review Points
The plan skill produces the parallelization plan. Present to the user:
- Wave summary: "Total N tasks organized into M waves."
- Wave detail table:
  Wave 0 (parallel): task-001, task-002, task-004
  Wave 1 (parallel): task-003, task-005
  Wave 2 (sequential): task-006
- Conflict notes: any file-overlap warnings
- Ask: "Please review the execution plan. Let me know if you'd like to adjust the wave assignments or dependencies."
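Deriving that per-wave summary from the plan output can be sketched as follows. The task shape (an `id` and a numeric `wave` per task) is an assumption based on the fields this document names, not a confirmed schema:

```python
from collections import defaultdict

def wave_summary(tasks: list[dict]) -> str:
    """Render the wave review summary shown to the user.

    Each task dict is assumed to carry 'id' and 'wave' keys.
    """
    waves = defaultdict(list)
    for t in tasks:
        waves[t["wave"]].append(t["id"])
    lines = [f"Total {len(tasks)} tasks organized into {len(waves)} waves."]
    for wave in sorted(waves):
        ids = waves[wave]
        # A single-task wave runs sequentially; multiple tasks run in parallel.
        mode = "parallel" if len(ids) > 1 else "sequential"
        lines.append(f"Wave {wave} ({mode}): " + ", ".join(ids))
    return "\n".join(lines)
```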
Gate 2 → 3
When user approves the plan:
- Verify harness-tasks.json has been written
- Verify all tasks have context (sub-agent prompt content)
- Verify all tasks have files_hint (batch conflict detection)
- Verify all tasks have validation.command. If any are missing, ask the user: "Task task-003 'Add OAuth providers' is missing a validation command. What command should be used? (e.g., npm test -- --testPathPattern=oauth)"
- Ask: "Everything is ready. Shall I start execution?"
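The pre-flight verification at this gate amounts to checking required fields on each task. A sketch, assuming harness-tasks.json holds a JSON array of task objects using the field names above (the overall file shape is an assumption):

```python
import json

REQUIRED_FIELDS = ("context", "files_hint")

def missing_fields(task: dict) -> list[str]:
    """List the required fields a task lacks, including the nested
    validation.command used for gate checks."""
    missing = [f for f in REQUIRED_FIELDS if not task.get(f)]
    if not task.get("validation", {}).get("command"):
        missing.append("validation.command")
    return missing

def preflight(harness_path: str) -> dict:
    """Map task id -> missing fields; an empty dict means ready to run."""
    with open(harness_path) as f:
        tasks = json.load(f)
    report = {t["id"]: missing_fields(t) for t in tasks}
    return {tid: miss for tid, miss in report.items() if miss}
```

Any non-empty entry in the preflight report becomes a question to the user, per the "fill gaps by asking" rule.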
Stage 3: Run (uses launchpad-run skill)
Entry
- Announce stage: "Stage 3/3: Run — Sub-agents are executing tasks in parallel."
- Invoke /lp:run
During Execution
The run skill handles autonomous execution. The pipeline skill adds:
- Batch progress reports: After each batch completes, summarize:
  Batch 1 complete: task-001 ✓, task-002 ✓, task-004 ✗ (TEST_FAIL)
  Next batch: task-003, task-005
- Failure triage: When a task fails, present the error and ask:
"task-004 failed: [error details]. Should I auto-retry, or would you like to modify the task?"
- Retry → let run skill retry automatically
- Modify → edit task context/validation, then re-dispatch
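The batch progress line shown above can be produced from per-task results. A sketch, where failure codes such as TEST_FAIL come from the example output and the `results` mapping is an assumed shape:

```python
def batch_report(batch_no: int, results: dict, next_batch: list) -> str:
    """Summarize a completed batch for the user.

    `results` maps task id -> 'ok' or a failure code such as 'TEST_FAIL'.
    """
    marks = ", ".join(
        f"{tid} ✓" if code == "ok" else f"{tid} ✗ ({code})"
        for tid, code in results.items()
    )
    lines = [f"Batch {batch_no} complete: {marks}"]
    if next_batch:
        lines.append("Next batch: " + ", ".join(next_batch))
    return "\n".join(lines)
```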
Completion
When all tasks are done:
- Final summary:
  Pipeline complete!
  - Spec: specs/feature-name.md (COMPLETED)
  - Tasks: 8/8 completed, 0 failed
  - Sessions: 2, Batches: 4
  - Total sub-agent dispatches: 10 (2 retries)
- Suggest next steps:
- "Let me know if you'd like a code review."
- "Shall I create a PR?"
State File: .launchpad.json
Tracks which stage the pipeline is in, for session resume:
{
"feature": "auth-system",
"stage": "spec",
"spec_file": "specs/auth-system.md",
"task_file": "specs/auth-system.tasks.md",
"harness_file": "harness-tasks.json",
"stage_history": [
{ "stage": "spec", "status": "completed", "timestamp": "2025-07-01T10:00:00Z" },
{ "stage": "plan", "status": "in_progress", "timestamp": "2025-07-01T10:30:00Z" }
]
}
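Advancing the pipeline then means closing out the current stage's history entry and opening the next one. A sketch of that transition against the state shape above (the function name is illustrative):

```python
import json
import time

def advance_stage(path: str, next_stage: str) -> dict:
    """Mark the current stage completed and open the next one."""
    with open(path) as f:
        state = json.load(f)
    now = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
    # Close out the stage the pipeline was in.
    if state["stage_history"]:
        state["stage_history"][-1]["status"] = "completed"
    # Open the next stage and persist immediately so resume works.
    state["stage"] = next_stage
    state["stage_history"].append(
        {"stage": next_stage, "status": "in_progress", "timestamp": now})
    with open(path, "w") as f:
        json.dump(state, f, indent=2)
    return state
```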
/launchpad status Command
Read .launchpad.json and report:
Launchpad: auth-system
├─ Stage 1 (Spec): ✓ completed — specs/auth-system.md
├─ Stage 2 (Plan): ● in progress — 5 tasks, 3 waves
└─ Stage 3 (Run): ○ pending
Use /launchpad resume to continue.
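Rendering that tree from the parsed state file can be sketched as follows; the per-stage glyphs match the example output, while the stage labels are taken from this document:

```python
STAGES = [("spec", "Spec"), ("plan", "Plan"), ("run", "Run")]
GLYPHS = {"completed": "✓", "in_progress": "●", "pending": "○"}

def render_status(state: dict) -> str:
    """Render the /launchpad status tree from parsed .launchpad.json."""
    history = {h["stage"]: h["status"] for h in state.get("stage_history", [])}
    lines = [f"Launchpad: {state['feature']}"]
    for i, (key, label) in enumerate(STAGES, start=1):
        status = history.get(key, "pending")
        branch = "└─" if i == len(STAGES) else "├─"
        lines.append(
            f"{branch} Stage {i} ({label}): "
            f"{GLYPHS[status]} {status.replace('_', ' ')}")
    return "\n".join(lines)
```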
/launchpad resume Command
- Read .launchpad.json
- Determine the current stage
- Resume from the appropriate point:
- Stage 1: Re-read spec, check for open questions, continue iteration
- Stage 2: Re-read task file, re-run plan if needed
- Stage 3: Invoke /lp:run (the run skill has its own session recovery)
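Resume is therefore a dispatch on the recorded stage. A sketch, where the returned action strings are illustrative placeholders for the actual sub-skill invocations:

```python
def resume_action(state: dict) -> str:
    """Pick the re-entry point for /launchpad resume from saved state."""
    stage = state.get("stage")
    if stage == "spec":
        return "re-read spec, check open questions, continue iteration"
    if stage == "plan":
        return "re-read task file, re-run /lp:plan if needed"
    if stage == "run":
        # The run skill has its own session recovery, so just re-invoke it.
        return "invoke /lp:run"
    return "no pipeline state found; start with /launchpad <description>"
```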
Error Handling
| Situation | Action |
|---|---|
| User says /launchpad without a description | Ask: "What feature would you like to build?" |
| Spec has unresolved questions at the gate | Block: "There are still N open questions. Please answer them with /lp:clarify." |
| Task has no validation command | Ask the user for the command before proceeding |
| Task has no context | Generate it from the spec and task title, then present for user review |
| Task has no files_hint | Ask: "Which files will this task modify?" |
| Run fails completely | Present the error and offer to re-run with an adjusted plan |
| Context window approaching its limit | Save state to .launchpad.json and suggest /launchpad resume in a new session |
Key Rules
- Never skip a stage. Even if the user says "just run it", walk through all three stages.
- Always get user confirmation at gates. The two gate checkpoints (spec→plan, plan→run) require explicit user approval.
- Fill gaps by asking, not guessing. If any required field is missing (validation command, files_hint, context), ask the user rather than making assumptions.
- One feature per pipeline. If the user wants to build multiple features, run separate pipelines (or suggest splitting into one spec with multiple tasks).
- State survives sessions. Always read/write .launchpad.json so the pipeline can resume after a context reset.
Source
https://github.com/Geono/claude-launchpad/blob/main/skills/launchpad/SKILL.md
Overview
Launchpad is an end-to-end orchestrator that chains launchpad-spec, launchpad-plan, and launchpad-run into a single guided workflow. It ensures every phase completes before the next and asks the right questions at each step to clarify requirements, plan dependencies, and execute with multi-agent sub-skills.
How This Skill Works
Triggering /launchpad starts Stage 1 (Spec). Launchpad invokes the three sub-skills in order: spec, plan, and run, while persisting state to .launchpad.json. Progress is gated by user answers and plan validation, and execution proceeds only after the plan is approved, enabling multi-agent dispatch in Stage 3.
When to Use It
- Starting a new feature by providing a description to guide spec, planning, and execution.
- Monitoring progress with /launchpad status to see the current stage and progress.
- Resuming a paused or interrupted pipeline via /launchpad resume.
- Reviewing and adjusting the task breakdown or plan before execution.
- Running multi-agent execution to dispatch sub-tasks and validate results.
Quick Start
- Step 1: /launchpad <feature description> to start the full pipeline.
- Step 2: Answer Stage 1 questions and approve when ready to proceed.
- Step 3: Monitor Stage 2/3 progress with /launchpad status or let Launchpad run automatically.
Best Practices
- Provide a clear feature description to optimize Stage 1 questions.
- Keep a .launchpad.json state file at the project root.
- Review the wave-based plan details (Wave 0/1/…) before execution.
- Ensure each task has context, files_hint, and a validation.command before Stage 3.
- Use /launchpad resume or edits to recover from interruptions.
Example Use Cases
- I want to build a user onboarding feature.
- Add OAuth providers for social sign-in.
- Implement a checkout workflow with parallel data fetches.
- Refactor the data access layer and migrate schemas.
- Migrate to a new API version with staged rollout.