# brewcode:spec

```shell
npx machina-cli add skill kochetkov-ma/claude-brewcode/spec --openclaw
```

## Input Handling
| Input | Action |
|---|---|
| `$ARGUMENTS` empty | Read `.claude/TASK.md` → first line = path → derive task dir |
| `$ARGUMENTS` has text | Use as task description |
| `$ARGUMENTS` has path | Read file as task description |
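The dispatch in the table above can be sketched in POSIX shell. This is an illustrative sketch, not the skill's actual implementation; `resolve_input` is a hypothetical helper and `ARGUMENTS` stands for the raw skill arguments after flag stripping:

```shell
# Hypothetical sketch of the Input Handling dispatch.
resolve_input() {
  ARGUMENTS="$1"
  if [ -z "$ARGUMENTS" ]; then
    # Empty: first line of .claude/TASK.md is a path; derive the task dir from it
    INPUT_TYPE="task_file"
  elif [ -f "$ARGUMENTS" ]; then
    # Existing file: read its contents as the task description
    INPUT_TYPE="file_path"
  else
    # Plain text: use it directly as the task description
    INPUT_TYPE="description"
  fi
  echo "$INPUT_TYPE"
}

resolve_input "Add OAuth login"   # → description
```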
## Flag Parsing
Parse `$ARGUMENTS` for flags BEFORE input detection:

| Flag | Effect |
|---|---|
| `-n`, `--noask` | Skip all user questions, auto-approve defaults |

Strip the flag from `$ARGUMENTS`. The remaining text is the description or path.
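Flag stripping can be sketched as follows; `parse_flags` is a hypothetical helper, and `ARGS` stands in for `$ARGUMENTS`:

```shell
# Minimal flag-stripping sketch, run before input detection.
parse_flags() {
  ARGS="$1"
  NOASK=no
  for flag in -n --noask; do
    case " $ARGS " in
      *" $flag "*)
        NOASK=yes
        # Remove the flag token and trim surrounding whitespace
        ARGS=$(echo " $ARGS " | sed "s/ $flag / /" | sed 's/^ *//;s/ *$//')
        ;;
    esac
  done
  echo "$NOASK|$ARGS"
}

parse_flags "--noask implement login"   # → yes|implement login
```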
## Naming
- Timestamp: `YYYYMMDD_HHMMSS` (e.g., `20260208_143052`)
- Name slug: lowercase, underscores, derived from the description (e.g., `auth_feature`)
- Task dir: `.claude/tasks/{TIMESTAMP}_{NAME}_task/`
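The naming convention can be sketched in shell; the slug rule (lowercase, underscores) is from this doc, but the `tr`/`sed` pipeline is one possible implementation, not the skill's actual code:

```shell
# Sketch: derive timestamp, name slug, and task dir from a description.
DESC="Add Auth Feature"
TS=$(date +%Y%m%d_%H%M%S)                 # e.g. 20260208_143052
NAME=$(printf '%s' "$DESC" \
  | tr '[:upper:]' '[:lower:]' \
  | tr -cs 'a-z0-9' '_' \
  | sed 's/^_*//;s/_*$//')                # → add_auth_feature
TASK_DIR=".claude/tasks/${TS}_${NAME}_task/"
echo "$TASK_DIR"
```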
## Workflow

### 1. Check Adapted Templates (REQUIRED FIRST)
EXECUTE using the Bash tool:

```shell
test -f .claude/tasks/templates/SPEC.md.template && echo "SPEC.md.template" || echo "SPEC.md.template MISSING"
```

STOP if MISSING — run `/brewcode:setup` first.
### 2. Read & Analyze Input
- Parse `$ARGUMENTS` per the Input Handling table
- Determine scope: files affected, areas of the codebase
- Identify what needs clarification
### 3. Clarifying Questions (AskUserQuestion)
If `--noask`: skip. Record in the SPEC User Q&A section: "Skipped (--noask mode)". Infer scope from the description and codebase analysis.

Otherwise: use the AskUserQuestion tool to ask 3-5 questions, grouped in batches of up to 4 per AskUserQuestion call. Focus on:

| # | Category | Example Questions |
|---|---|---|
| 1 | Scope | What's in/out? Which modules are affected? |
| 2 | Constraints | Required libraries? Backward compatibility? API contracts? |
| 3 | Edge cases / ambiguities | Concurrent access? Empty/null inputs? Error recovery? |

Record all Q&A for the User Q&A section of the SPEC.
### 2.5. Feature Splitting Check
After gathering requirements, evaluate scope:
IF requirements cover >3 independent areas OR estimated complexity >12 plan phases:
→ AskUserQuestion: "I suggest splitting into X tasks: [A], [B], [C]. Agree?"
→ If yes: create SPEC only for first task, record others in Notes section
→ If no: continue with full scope
### 4. Partition Research Areas (5-10 areas)
Analyze the project and split it into logical parts for parallel research:

| Area | Pattern | Agent |
|------|---------|-------|
| Controllers | `**/controllers/` | developer |
| Services | `**/services/` | developer |
| DB/Repos | `**/repositories/` | developer |
| Tests | `**/test/` | tester |
| Config | `*.yml`, `docker-*` | developer |
| Docs | `*.md`, `docs/` | Explore |

See `references/SPEC-creation.md` for detailed parallel research instructions.
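As an illustration of how the area patterns map to files, this self-contained sketch builds a throwaway tree and matches two of the globs; all paths and file names here are invented for the demo:

```shell
# Demo: match "Services" and "Controllers" area patterns against a sample tree.
demo=$(mktemp -d)
mkdir -p "$demo/src/services" "$demo/src/controllers"
touch "$demo/src/services/UserService.kt" "$demo/src/controllers/AuthController.kt"

# Count files per area; $(( )) normalizes any whitespace from wc
services=$(( $(find "$demo" -path '*/services/*' -type f | wc -l) ))
controllers=$(( $(find "$demo" -path '*/controllers/*' -type f | wc -l) ))
echo "services=$services controllers=$controllers"
rm -rf "$demo"
```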
### 5. Parallel Research (ONE message, 5-10 agents)
Send ONE message with 5-10 Task calls in PARALLEL:

```
Task(subagent_type="Plan", prompt="Analyze architecture...")
Task(subagent_type="developer", prompt="Analyze services...")
Task(subagent_type="tester", prompt="Analyze test patterns...")
Task(subagent_type="reviewer", prompt="Analyze quality...")
Task(subagent_type="Explore", prompt="Find library docs...")
```

Agent prompt template:

> **Context:** BC_PLUGIN_ROOT is available in your context (injected by pre-task.mjs hook).
> Analyze {AREA} for task: "{TASK_DESCRIPTION}"
> Focus: patterns, reusable code, risks, constraints
> Context files: {FILES_IN_AREA}
> Output: findings (bullets), assets (table), risks, recommendations
> NO large code blocks - use file:line references
### 6. Consolidate into SPEC
- Create the task directory: `.claude/tasks/{TIMESTAMP}_{NAME}_task/`
- Read `.claude/tasks/templates/SPEC.md.template` (project-adapted)
- Merge agent findings (deduplicate)
- Fill SPEC sections per the Consolidation Rules in `references/SPEC-creation.md`
- Write `.claude/tasks/{TIMESTAMP}_{NAME}_task/SPEC.md`
- Include a Research table with per-agent findings
### 7. Present Key Findings (AskUserQuestion)
If `--noask`: skip validation. Auto-approve all findings.

Otherwise: use AskUserQuestion to validate with the user:
- Key architectural decisions made
- Risk assessment and proposed mitigations
- Any assumptions that need confirmation
- Completeness check: "Does this cover everything?"

Incorporate user feedback into the SPEC.
### 8. Review SPEC (reviewer agent + fix loop)

```
Task(subagent_type="reviewer", prompt="
> **Context:** BC_PLUGIN_ROOT is available in your context (injected by pre-task.mjs hook).
Review SPEC at {SPEC_PATH}
Check: completeness, consistency, feasibility, risks
Output: list of remarks with severity (critical/major/minor), specific fixes")
```

Iteration loop:

```
WHILE remarks.critical > 0 OR remarks.major > 0:
  1. Fix all critical/major remarks in SPEC.md
  2. Re-run reviewer
MAX 3 iterations. After 3 rounds, present remaining remarks to user via AskUserQuestion.
```

Exit criteria: no critical/major remarks remaining OR 3 iterations exhausted.
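The exit behavior of the fix loop can be sketched abstractly in shell. `run_review_loop` and the assumed fix rate (two remarks resolved per round) are placeholders standing in for the real reviewer agent and SPEC edits:

```shell
# Abstract sketch of the reviewer fix loop with a hard cap of 3 iterations.
run_review_loop() {
  remaining="$1"   # initial count of critical+major remarks (placeholder)
  iterations=0
  while [ "$remaining" -gt 0 ] && [ "$iterations" -lt 3 ]; do
    # Fix critical/major remarks in SPEC.md, then re-run the reviewer;
    # here we assume each round resolves two remarks (illustrative only).
    remaining=$((remaining - 2))
    if [ "$remaining" -lt 0 ]; then remaining=0; fi
    iterations=$((iterations + 1))
  done
  if [ "$remaining" -gt 0 ]; then
    echo "escalate: $remaining remarks remain after $iterations rounds"
  else
    echo "clean after $iterations rounds"
  fi
}

run_review_loop 3   # → clean after 2 rounds
```

When the cap is hit with remarks outstanding, the "escalate" branch corresponds to presenting the remaining remarks to the user via AskUserQuestion.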
Template source: always from `.claude/tasks/templates/` (project), never from plugin base templates directly.
## Output

```markdown
# Spec Created
## Detection
| Field | Value |
|-------|-------|
| Arguments | `{received args}` |
| Input Type | `{text description or file path}` |
| Noask | `{yes or no}` |
## Files Created
- SPEC: .claude/tasks/{TIMESTAMP}_{NAME}_task/SPEC.md
- Task Dir: .claude/tasks/{TIMESTAMP}_{NAME}_task/
## Next Step
Run: /brewcode:plan .claude/tasks/{TIMESTAMP}_{NAME}_task/
```
## Source
https://github.com/kochetkov-ma/claude-brewcode/blob/main/brewcode/skills/spec/SKILL.md

## Overview
brewcode:spec creates a detailed task specification by researching the project and engaging with the user. It analyzes input, asks targeted clarifications (unless --noask is used), and partitions work into research areas before consolidating findings into a SPEC.md in the task directory.
## How This Skill Works
It starts by validating the SPEC template, reads input or a provided description, and then gathers requirements either through clarifying questions or auto-approval with --noask. It then partitions the project into research areas for parallel analysis and consolidates findings into a single SPEC stored under the task directory.
## When to Use It
- Starting a new feature task from a description with unclear scope
- Input is ambiguous or touches multiple modules
- You want to auto-approve requirements using --noask
- Planning large refactors that require phased tasks
- You need a reproducible SPEC.md saved in the task directory
## Quick Start
- Step 1: Check for the SPEC template: `test -f .claude/tasks/templates/SPEC.md.template && echo "SPEC.md.template" || echo "SPEC.md.template MISSING"`
- Step 2: Read and analyze input (ARGUMENTS or .claude/TASK.md) to determine scope
- Step 3: If not using --noask, run AskUserQuestion to generate 3-5 questions and record Q&A; then partition research areas and consolidate into SPEC
## Best Practices
- Verify that SPEC.md.template exists before proceeding
- Parse flags (-n/--noask) before analyzing input
- Record all user Q&A in the SPEC's User Q&A section
- Split scope into independent areas to enable parallel research
- Create the task directory .claude/tasks/{TIMESTAMP}_{NAME}_task/ and write SPEC.md inside it
## Example Use Cases
- Create SPEC for new authentication feature with 2FA
- Draft SPEC for a billing service migration across modules
- Specify a reporting dashboard spanning multiple data sources
- Consolidate SPEC for fixing a race condition in a concurrent worker
- SPEC for upgrading core dependencies across the repo