omakase-off
npx machina-cli add skill aiskillstore/marketplace/omakase-off --openclaw

Omakase-Off
Chef's choice exploration - when you're not sure WHAT to build, explore different approaches in parallel.
Part of Test Kitchen Development:
- omakase-off - Chef's choice exploration (different approaches/plans)
- cookoff - Same recipe, multiple cooks compete (same plan, multiple implementations)
Core principle: Let indecision emerge naturally during brainstorming, then implement multiple approaches in parallel to let real code + tests determine the best solution.
Three Triggers
Trigger 1: BEFORE Brainstorming
When: "I want to build...", "Create a...", "Implement...", "Add a feature..."
Present:
Before we brainstorm the details, would you like to:
1. Brainstorm together - We'll explore requirements and design step by step
2. Omakase (chef's choice) - I'll generate 3-5 best approaches, implement them
in parallel, and let tests pick the winner
Trigger 2: DURING Brainstorming (Indecision Detection)
Detection signals:
- 2+ uncertain responses in a row on architectural decisions
- Phrases: "not sure", "don't know", "either works", "you pick", "no preference"
When detected:
You seem flexible on the approach. Would you like to:
1. I'll pick what seems best and continue brainstorming
2. Explore multiple approaches in parallel (omakase-off)
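The "2+ uncertain responses in a row" signal can be sketched as a naive streak counter, e.g. in shell. The phrases come from the signal list above; the sample replies are illustrative:

```shell
#!/bin/sh
# Naive indecision detector: two uncertain replies in a row trigger the offer.
# The sample replies are illustrative stand-ins for user input.
streak=0
for reply in "not sure, either works for me" "you pick, no preference"; do
  if printf '%s\n' "$reply" | grep -Eqi "not sure|don't know|either works|you pick|no preference"; then
    streak=$((streak + 1))
  else
    streak=0   # a decisive answer resets the streak
  fi
done
if [ "$streak" -ge 2 ]; then
  echo "indecision detected: offer omakase-off"
fi
```

A streak (rather than a total count) matters: one hesitant answer amid otherwise decisive ones should not trigger the offer.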
Trigger 3: Explicitly Requested
- "try both approaches", "explore both", "omakase"
- "implement both variants", "let's see which is better"
Workflow Overview
| Phase | Description |
|---|---|
| 0. Entry | Present brainstorm vs omakase choice |
| 1. Brainstorm | Passive slot detection during design |
| 1.5. Decision | If slots detected, offer parallel exploration |
| 2. Plan | Generate implementation plan per variant |
| 3. Implement | Dispatch ALL agents in SINGLE message |
| 4. Evaluate | Scenario tests → fresh-eyes → judge survivors |
| 5. Complete | Finish winner, cleanup losers |
See references/detailed-workflow.md for full phase details.
Directory Structure
```
docs/plans/<feature>/
  design.md                 # Shared context from brainstorming
  omakase/
    variant-<slug>/
      plan.md               # Implementation plan for this variant
      result.md             # Final report
.worktrees/
  variant-<slug>/           # Omakase variant worktree
```
Slot Classification
| Type | Examples | Worth exploring? |
|---|---|---|
| Architectural | Storage engine, framework, auth method | Yes |
| Trivial | File location, naming, config format | No |
Only architectural decisions become slots for parallel exploration.
Variant Limits
Maximum of 5-6 implementations. Avoid a full combinatorial explosion:
- Identify the primary axis (biggest architectural impact)
- Create variants along that axis
- Fill secondary slots with natural pairings
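The "primary axis plus natural pairings" rule can be sketched like this; all slot values here are hypothetical. With three storage engines and two CLI frameworks, pairing yields 3 variants instead of the 3 × 2 = 6 a full cross product would produce:

```shell
#!/bin/sh
# Enumerate variants along the primary axis (storage engine) and pair the
# secondary slot (CLI framework) instead of crossing it.
# All slot values are hypothetical.
pair_framework() {
  case "$1" in
    json)   echo "argparse" ;;   # simple storage pairs with a simple CLI
    sqlite) echo "click" ;;
    duckdb) echo "click" ;;
  esac
}
count=0
for storage in json sqlite duckdb; do
  echo "variant-$storage (storage=$storage, framework=$(pair_framework "$storage"))"
  count=$((count + 1))
done
```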
Critical Rules
- Dispatch ALL variants in a SINGLE message - multiple Task tool calls, one message
- MUST use scenario-testing - Not manual verification
- Fresh-eyes on survivors - Required before judge comparison
- Always clean up losers - Remove worktrees and branches
- Write result.md - Document what was tried and why winner won
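The cleanup rule might look like this in practice. The repo setup here exists only to make the sketch self-contained, and the variant names are illustrative:

```shell
#!/bin/sh
# Self-contained demo: build a throwaway repo with two variant worktrees,
# then remove the loser's worktree and branch. Names are illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "init"
git worktree add -q -b variant-json .worktrees/variant-json
git worktree add -q -b variant-sqlite .worktrees/variant-sqlite

winner="variant-json"
for wt in .worktrees/variant-*; do
  name=$(basename "$wt")
  if [ "$name" != "$winner" ]; then
    git worktree remove --force "$wt"   # drop the losing worktree
    git branch -D "$name"               # delete its branch
  fi
done
```

`git worktree remove` deletes the checkout but leaves the branch, so the explicit `git branch -D` is needed to fully discard a loser.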
Skills Orchestrated
| Dependency | Usage |
|---|---|
| brainstorming | Modified flow with passive slot detection |
| writing-plans | Generate implementation plan per variant |
| git-worktrees | Create isolated worktree per variant |
| parallel-agents | Dispatch all variant subagents in parallel |
| scenario-testing | Run same scenarios against all variants |
| fresh-eyes | Quality review on survivors → input for judge |
| finish-branch | Handle winner (merge/PR), cleanup losers |
Example Flow
User: "I need to build a CLI todo app."
Claude: [Triggers omakase-off]
Before we dive in, how would you like to approach this?
1. Brainstorm together
2. Omakase (chef's choice)
User: "1"
Claude: [Brainstorming proceeds, detects indecision on storage]
You seem flexible on storage (JSON vs SQLite). Would you like to:
1. Explore in parallel - I'll implement both variants
2. Best guess - I'll pick JSON (simpler)
User: "1"
[Creates plans for variant-json, variant-sqlite]
[Dispatches parallel agents in SINGLE message]
[Runs scenario tests on both]
[Fresh-eyes review on survivors]
[Presents comparison, user picks winner]
[Cleans up loser, finishes winner branch]
Source
https://github.com/aiskillstore/marketplace/blob/main/skills/2389-research/test-kitchen/omakase-off/SKILL.md

Overview
Omakase-off serves as the entry gate for build/create/implement requests. It offers a choice between brainstorm collaboration and chef's-choice parallel exploration, and will automatically detect indecision to trigger parallel exploration. The goal is to let tests and real code determine the best solution by running multiple approaches in parallel.
How This Skill Works
Start by presenting the Brainstorming vs Omakase options before design. If indecision is detected during brainstorming, offer parallel exploration (omakase-off). When parallel exploration is explicitly requested or indecision is detected, generate 3-5 variant plans, dispatch all implementations in a single message, then evaluate with scenario tests to pick a winner and clean up the losers.
When to Use It
- You want to build something but aren’t sure which architectural approach is best.
- You’re starting a feature and want to explore multiple design options in parallel.
- During brainstorming you detect indecision and want to trigger parallel exploration.
- You explicitly request to try multiple approaches (e.g., 'try both' or 'omakase').
- You need a winner-determined implementation by running tests across multiple variants.
Quick Start
- Step 1: Offer Brainstorming vs Omakase options at the entry point.
- Step 2: If indecision is detected or requested, generate 3-5 variant plans and implement them in parallel.
- Step 3: Dispatch all variants in a single message, run scenario tests, select the winner, and clean up the losers.
Best Practices
- Present Brainstorming vs Omakase options at the outset (Trigger 1).
- Monitor for indecision signals and offer parallel exploration (Trigger 2).
- Limit to 5-6 variant implementations to avoid explosion (Variant Limits).
- Dispatch ALL variants in a SINGLE message for cohesive evaluation (Critical Rule 1).
- Document results with result.md and clean up losers after the winner is selected (Critical Rule 5).
Example Use Cases
- A new feature where the best storage engine is unclear; run 3 variants in parallel and test outcomes.
- Choosing between auth methods (token vs session) and evaluating performance and security implications.
- Deciding between monolith vs microservice architecture for a new module and comparing integration complexity.
- Comparing front-end state management options (Redux vs Context API) with equivalent feature sets.
- Exploring multiple implementation variants for a data pipeline to identify the most reliable path.