dev-buddy-once
npx machina-cli add skill Z-M-Huang/vcp/dev-buddy-once --openclaw

One-Shot Task Runner
Run a single arbitrary task using any configured AI provider — no pipeline, no reviews, no orchestration.
Usage: /dev-buddy-once use <provider> [model <model>] <task description>
Examples:
- /dev-buddy-once use minimax M2.5 model to help me design a new UI
- /dev-buddy-once use codex o3 model to refactor the auth module
- /dev-buddy-once use anthropic-subscription sonnet model to explain the codebase
- /dev-buddy-once use minimax to analyze performance bottlenecks
Step 1: Parse Arguments
Extract three pieces from the user's message:
- Provider name — the preset name (e.g., "minimax", "codex", "anthropic-subscription")
- Model — the model identifier (e.g., "M2.5", "o3", "sonnet")
- Task — everything else (the actual work to do)
The user's message follows the skill trigger. Common patterns:
- `use <provider> <model> model to <task>`
- `use <provider> model <model> to <task>`
- `use <provider> to <task>` (model omitted — will default)
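The patterns above can be sketched as a small parser. This is a minimal illustration only; `parseOnceArgs` and its regexes are assumptions, not the skill's actual implementation:

```typescript
// Hypothetical sketch of Step 1: pull provider, optional model, and task
// out of a message like "use codex o3 model to refactor the auth module".
interface ParsedArgs {
  provider: string;
  model?: string;
  task: string;
}

function parseOnceArgs(message: string): ParsedArgs | null {
  const patterns = [
    // use <provider> model <model> to <task>
    /^use\s+(\S+)\s+model\s+(\S+)\s+to\s+(.+)$/i,
    // use <provider> <model> model to <task>
    /^use\s+(\S+)\s+(\S+)\s+model\s+to\s+(.+)$/i,
    // use <provider> to <task>  (model omitted, will default later)
    /^use\s+(\S+)\s+to\s+(.+)$/i,
  ];
  for (const re of patterns) {
    const m = message.trim().match(re);
    if (!m) continue;
    // Three capture groups means a model was given; two means it was omitted.
    return m.length === 4
      ? { provider: m[1], model: m[2], task: m[3] }
      : { provider: m[1], task: m[2] };
  }
  return null;
}
```

Pattern order matters: the two model-bearing forms must be tried before the bare `use <provider> to <task>` form, or a provider followed by a model would never match.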
Step 2: Resolve Preset
List available presets:

```shell
bun -e "
import { readPresets } from '${CLAUDE_PLUGIN_ROOT}/scripts/preset-utils.ts';
const presets = readPresets();
console.log(JSON.stringify(Object.entries(presets.presets).map(([k, v]) => ({
  name: k, type: v.type, models: v.models || ['haiku', 'sonnet', 'opus']
}))));
"
```
Match the provider name deterministically:
- Exact match (case-insensitive) → use it
- Unique prefix match (case-insensitive) → use it (e.g., "mini" matches "MiniMax-API" if no other preset starts with "mini")
- Multiple matches → use AskUserQuestion listing the matches, ask user to pick
- No matches → report error with available preset names
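The matching rules can be sketched as follows; `resolvePreset` and its return shape are hypothetical, shown only to make the decision order explicit:

```typescript
// Hypothetical sketch of the deterministic preset-name matching rules:
// exact match first, then unique prefix, else ambiguous or none.
type MatchResult =
  | { kind: "match"; name: string }
  | { kind: "ambiguous"; candidates: string[] }
  | { kind: "none" };

function resolvePreset(input: string, presetNames: string[]): MatchResult {
  const q = input.toLowerCase();
  // 1. Exact match (case-insensitive)
  const exact = presetNames.find((n) => n.toLowerCase() === q);
  if (exact) return { kind: "match", name: exact };
  // 2. Prefix match (case-insensitive), accepted only when unique
  const prefixed = presetNames.filter((n) => n.toLowerCase().startsWith(q));
  if (prefixed.length === 1) return { kind: "match", name: prefixed[0] };
  if (prefixed.length > 1) return { kind: "ambiguous", candidates: prefixed };
  return { kind: "none" };
}
```

An `ambiguous` result maps to the AskUserQuestion path, and `none` maps to the error report listing available presets.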
Validate model:
- If the user specified a model, check it exists in the preset's `models[]` list
- If the user did NOT specify a model:
  - Subscription presets: default to `sonnet`
  - API/CLI presets: default to `preset.models[0]`
- For subscription presets (no `models[]`), valid models are: `haiku`, `sonnet`, `opus`
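The validation and defaulting rules might look like this as code; a sketch only, with the `Preset` shape inferred from the preset listing above:

```typescript
// Hypothetical sketch of model validation and per-type defaulting.
interface Preset {
  type: "subscription" | "api" | "cli";
  models?: string[];
}

// Subscription presets have no models[] list; these are the valid names.
const SUBSCRIPTION_MODELS = ["haiku", "sonnet", "opus"];

function resolveModel(preset: Preset, requested?: string): string {
  const valid = preset.type === "subscription"
    ? SUBSCRIPTION_MODELS
    : preset.models ?? [];
  if (!valid.length) throw new Error("Preset lists no models");
  if (requested) {
    if (!valid.includes(requested)) {
      throw new Error(`Model "${requested}" not in [${valid.join(", ")}]`);
    }
    return requested;
  }
  // Defaults: subscription -> sonnet, API/CLI -> first listed model
  return preset.type === "subscription" ? "sonnet" : valid[0];
}
```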
Step 3: Route by Provider Type
Read the matched preset's type field and route accordingly:
Subscription (type: "subscription")
Use the Task tool directly — no external process needed:
```
Task(
  subagent_type: "general-purpose",
  model: "<model>",
  prompt: "<task>"
)
```
The subagent works in the project directory with full tool access. Report its output to the user when it completes.
API (type: "api")
Run the one-shot runner script:
```shell
bun "${CLAUDE_PLUGIN_ROOT}/scripts/one-shot-runner.ts" \
  --type api \
  --preset "<exact_preset_name>" \
  --model "<model>" \
  --cwd "${CLAUDE_PROJECT_DIR}" \
  --task-stdin <<'TASK_EOF'
<task_text>
TASK_EOF
```
Uses --task-stdin with heredoc to avoid OS argv size limits and ps exposure.
Derive timeout: Read ~/.vcp/ai-presets.json → find the preset by name → read timeout_ms (default: 300000 if not set or lookup fails).
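A sketch of the timeout derivation, assuming the presets file keys presets by name under a `presets` object (as the `readPresets()` listing above suggests); `presetTimeoutMs` is a hypothetical helper:

```typescript
// Hypothetical sketch: derive timeout_ms for a preset from ~/.vcp/ai-presets.json,
// falling back to 300000 (5 min) when the file or preset lookup fails.
import { readFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

function presetTimeoutMs(presetName: string, fallbackMs = 300_000): number {
  try {
    const raw = readFileSync(join(homedir(), ".vcp", "ai-presets.json"), "utf8");
    const config = JSON.parse(raw);
    // Assumed file shape: { "presets": { "<name>": { "timeout_ms": 300000, ... } } }
    const preset = config.presets?.[presetName];
    return typeof preset?.timeout_ms === "number" ? preset.timeout_ms : fallbackMs;
  } catch {
    // Missing file, unreadable JSON, or unknown preset: use the default.
    return fallbackMs;
  }
}
```

For CLI presets the same lookup applies with a 1200000ms fallback, matching the script's 20-minute default.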
IMPORTANT: The Bash tool has a hard max timeout of 600,000ms (10 min). API tasks can run much longer (e.g., 30 min). Always use run_in_background: true to prevent the Bash tool from killing the process prematurely.
After launching:
- Save the returned `task_id` from the Bash tool. If `run_in_background` does not return a `task_id`, report a dispatch failure to the user — do not retry in foreground mode.
- CRITICAL — Poll with the correct timeout (NOT the default 30s):

  TaskOutput(task_id: "<task_id>", block: true, timeout: min(timeout_ms + 120000, 600000))

  For a 5-min preset (default 300000ms), this = min(420000, 600000) = 420000ms (7 min). The default TaskOutput timeout is only 30s — far too short for API tasks that typically take 2-5 min.
- If TaskOutput returns but the task is still running (not complete), repeat `TaskOutput` with `timeout: 600000` until done.
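The timeout arithmetic can be captured in a tiny helper (hypothetical, shown only to make the computation concrete):

```typescript
// Hypothetical helper: compute the TaskOutput polling timeout from a
// preset's timeout_ms. The Bash tool caps any single wait at 10 minutes,
// so we add a 2-minute grace period and clamp to that ceiling.
const BASH_TOOL_MAX_MS = 600_000;

function pollTimeoutMs(presetTimeoutMs: number): number {
  return Math.min(presetTimeoutMs + 120_000, BASH_TOOL_MAX_MS);
}
```

A 5-minute API preset yields 420000ms; a 20-minute CLI preset clamps to 600000ms and requires repeated polling.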
The script:
- Spawns `api-task-runner.ts` with the preset, model, and task (via stdin)
- The task runner creates a V2 Agent SDK session, runs the task, and exits
- Outputs a JSON event to stdout
CLI (type: "cli")
Prerequisite: The CLI preset must have a one_shot_args_template configured. This template uses only {model}, {prompt}, and {reasoning_effort} placeholders (no {output_file} or {schema_path} — those are pipeline-only).
If the preset does not have one_shot_args_template, report the error to the user and suggest configuring it via /dev-buddy-config or /dev-buddy-manage-presets.
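Template substitution with placeholder validation might be sketched as follows; `renderOneShotArgs` is hypothetical, and dropping `{reasoning_effort}` when unset is an assumption:

```typescript
// Hypothetical sketch: fill a CLI preset's one_shot_args_template.
// Only {model}, {prompt}, and {reasoning_effort} are allowed in one-shot
// mode; anything else (e.g. {output_file}, {schema_path}) is pipeline-only.
function renderOneShotArgs(
  template: string,
  vars: { model: string; prompt: string; reasoning_effort?: string },
): string {
  // Reject any {placeholder} that is not one of the three allowed names.
  const leftover = template.match(/\{(?!model\}|prompt\}|reasoning_effort\})\w+\}/);
  if (leftover) {
    throw new Error(`Unsupported placeholder ${leftover[0]} in one_shot_args_template`);
  }
  return template
    .replace("{model}", vars.model)
    .replace("{prompt}", vars.prompt)
    .replace("{reasoning_effort}", vars.reasoning_effort ?? "");
}
```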
Run the one-shot runner script:
```shell
bun "${CLAUDE_PLUGIN_ROOT}/scripts/one-shot-runner.ts" \
  --type cli \
  --preset "<exact_preset_name>" \
  --model "<model>" \
  --cwd "${CLAUDE_PROJECT_DIR}" \
  --task-stdin <<'TASK_EOF'
<task_text>
TASK_EOF
```
Derive timeout: Read ~/.vcp/ai-presets.json → find the preset by name → read timeout_ms (default: 1200000 for CLI if not set or lookup fails — matches the script's 20-minute default).
IMPORTANT: The Bash tool has a hard max timeout of 600,000ms (10 min). CLI tasks can run much longer (e.g., Codex with 20-min timeout). Always use run_in_background: true to prevent the Bash tool from killing the process prematurely.
After launching:
- Save the returned `task_id` from the Bash tool. If `run_in_background` does not return a `task_id`, report a dispatch failure to the user — do not retry in foreground mode.
- CRITICAL — Poll with the correct timeout (NOT the default 30s):

  TaskOutput(task_id: "<task_id>", block: true, timeout: min(timeout_ms + 120000, 600000))

  For a 20-min CLI preset (default 1200000ms), this = min(1320000, 600000) = 600000ms (10 min). The default TaskOutput timeout is only 30s — far too short for CLI tasks that typically take 5-20 min.
- If TaskOutput returns but the task is still running (not complete), repeat `TaskOutput` with `timeout: 600000` until done.
The CLI tool runs directly in the project directory (e.g., `codex exec --full-auto`). Its output streams to the terminal.
Step 4: Report Results
After the task completes, read the script's stdout JSON output:
Success (exit code 0)
```json
{"event": "complete", "provider": "minimax", "model": "M2.5", "result": "..."}
```
Report: provider, model used, and a summary of what was accomplished.
Validation Error (exit code 1)
```json
{"event": "error", "phase": "validation", "error": "..."}
```
Report the validation error (wrong model, missing preset, bad template).
Execution Error (exit code 2)
```json
{"event": "error", "phase": "api_execution|cli_execution", "error": "..."}
```
Report what went wrong (session failure, CLI not installed, auth error).
Timeout (exit code 3)
```json
{"event": "error", "phase": "api_execution|cli_execution", "error": "..."}
```
Report that the task timed out. Suggest increasing timeout_ms on the preset via /dev-buddy-manage-presets.
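The exit-code handling above can be condensed into one mapping; `summarize` and its message wording are hypothetical:

```typescript
// Hypothetical sketch: map the runner's exit code and stdout JSON event
// to the report shown to the user.
interface RunnerEvent {
  event: "complete" | "error";
  phase?: string;     // "validation" | "api_execution" | "cli_execution"
  provider?: string;
  model?: string;
  result?: string;
  error?: string;
}

function summarize(exitCode: number, ev: RunnerEvent): string {
  switch (exitCode) {
    case 0: // success
      return `Done via ${ev.provider}/${ev.model}: ${ev.result}`;
    case 1: // validation error: wrong model, missing preset, bad template
      return `Validation error: ${ev.error}`;
    case 2: // execution error: session failure, CLI not installed, auth error
      return `Execution error (${ev.phase}): ${ev.error}`;
    case 3: // timeout
      return `Timed out (${ev.phase}): ${ev.error}; consider raising timeout_ms on the preset`;
    default:
      return `Unknown exit code ${exitCode}`;
  }
}
```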
Error Handling
| Scenario | Action |
|---|---|
| Preset not found | List available presets, suggest /dev-buddy-manage-presets list |
| Model not in preset | List preset's available models |
| Multiple preset matches | AskUserQuestion with matching names |
| CLI preset missing one_shot_args_template | Report error, suggest /dev-buddy-config or /dev-buddy-manage-presets to add it |
| CLI tool not installed | Report error, suggest installing the tool |
| API task runner fails | Report error from script output |
| Task times out | Report timeout, suggest increasing timeout_ms |
Anti-Patterns
- Do NOT run a full pipeline — this is a single one-shot task
- Do NOT create pipeline tasks (TaskCreate/TaskUpdate) — no orchestration needed
- Do NOT skip the preset resolution step — always validate provider and model first
- Do NOT guess the preset type — always read it from the presets file
- For subscription: do NOT run the one-shot-runner.ts script — use Task tool directly
- Do NOT fall back to foreground Bash when background TaskOutput returns empty — the task is likely still running. Increase the TaskOutput timeout instead.
- Do NOT retry the same API/CLI task in foreground mode — the Bash tool's 2-minute default timeout is always shorter than the typical task duration (2-5 min for API, 5-20 min for CLI). Foreground mode will always kill the process prematurely.
- Do NOT use the default TaskOutput timeout (30s) for API/CLI tasks — always pass the computed timeout as specified in the polling instructions above.
Source
https://github.com/Z-M-Huang/vcp/blob/main/plugins/dev-buddy/skills/dev-buddy-once/SKILL.md

Overview
dev-buddy-once runs a single arbitrary task against a configured AI provider and model, with no pipeline or orchestration. It supports subscription, API, and CLI presets, enabling quick one-off experiments or targeted evaluations. Use /dev-buddy-once use <provider> [model <model>] <task description> to specify your setup and task.
How This Skill Works
The skill parses the command to extract the provider, optional model, and task. It then resolves a preset from the available presets, validating or defaulting the model based on the preset type. Finally, it routes execution by provider type: subscription presets run via the Task tool directly, while API/CLI presets invoke the one-shot runner script with proper timeout handling and background execution, deriving timeout from ~/.vcp/ai-presets.json and polling TaskOutput for results.
When to Use It
- When you need to run a single task against a specific provider and model without any pipelines or reviews.
- When testing or prototyping a code/refactor task using a chosen provider (e.g., /dev-buddy-once use codex o3 model to refactor the auth module).
- When evaluating a new provider with a quick, one-off task to compare outputs.
- When tasks are long-running and you want to run in API/CLI mode with background execution and proper timeouts.
- When you want a minimal, deterministic path to obtain results for a single, isolated task.
Quick Start
- Step 1: Run /dev-buddy-once use <provider> [model <model>] <task description> to start a one-shot task.
- Step 2: The tool resolves the matching preset, validates or defaults the model, and selects the correct execution path.
- Step 3: Retrieve results from the runner (Task output) once the task completes.
Best Practices
- Always specify a provider and model when possible to use a deterministic preset.
- If using a subscription preset, prefer it for quick, short-running tasks.
- For API/CLI tasks, ensure the model exists in the preset's models list and monitor timeouts carefully.
- Write clear, specific task descriptions to reduce output ambiguity from the AI.
- After starting a task, retrieve results via TaskOutput with the appropriate timeout to avoid premature failures.
Example Use Cases
- /dev-buddy-once use minimax M2.5 model to help me design a new UI
- /dev-buddy-once use codex o3 model to refactor the auth module
- /dev-buddy-once use anthropic-subscription sonnet model to explain the codebase
- /dev-buddy-once use minimax to analyze performance bottlenecks
- /dev-buddy-once use gpt-4o model to summarize a design document