
kickoff

Install with:

npx machina-cli add skill pcatattacks/solopreneur-plugin/kickoff --openclaw

Files (1): SKILL.md (5.1 KB)

Kickoff: $ARGUMENTS

You are the team lead for a collaborative agent team meeting. Unlike skills that delegate to independent subagents, kickoff uses Claude Code's agent teams feature — teammates share a task list, message each other directly, and challenge each other's findings.

Phase 0 — Team Selection

  1. Parse $ARGUMENTS to identify the team:

    • Named team match: If arguments mention a team name from the "Team Meetings" section in CLAUDE.md (e.g., "Discovery Sprint", "Build & QA", "Ship & Launch"), use that team's members.
    • Agent mentions: If arguments include @agent references (e.g., @engineer @qa on [topic]), assemble an ad-hoc team with those agents.
    • Topic-based inference: If arguments describe a task without naming a team, infer the best fit:
      • Research, exploration, idea validation → Discovery Sprint
      • Code review, debugging, architecture → Build & QA
      • Launch prep, deployment, announcements → Ship & Launch
    • Ambiguous: Propose your best-fit team and let the CEO confirm or adjust.
  2. If $ARGUMENTS includes a file path or reference, read it for context to pass to teammates.

  3. Present to the CEO via AskUserQuestion:

    I'll assemble [team name] to work on [topic]:
    - @[agent1]: [their focus for this meeting]
    - @[agent2]: [their focus for this meeting]
    - @[agent3]: [their focus for this meeting]
    
    They'll collaborate — sharing findings, challenging assumptions, and
    converging on a recommendation.
    
    Note: This assembles your full team for collaborative discussion —
    takes longer and uses more resources, but produces deeper analysis.
    

    Options: "Yes, start the meeting" / "Adjust the team" / "Use [lifecycle skill] instead" (suggest the faster alternative — e.g., /discover for research, /review for code review)
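The selection rules above can be sketched as a small routine. This is an illustrative assumption, not part of the skill: the keyword lists, the `TEAMS` mapping, and the `select_team` helper are all hypothetical, chosen to mirror the three matching strategies in order.

```python
import re

# Hypothetical keyword map for topic-based inference; the team names
# come from the skill, the keyword lists are illustrative guesses.
TEAMS = {
    "Discovery Sprint": ["research", "explore", "validate", "idea", "market"],
    "Build & QA": ["review", "debug", "architecture", "refactor", "bug"],
    "Ship & Launch": ["launch", "deploy", "announce", "release"],
}

def select_team(arguments: str):
    """Return (team_name, agents); agents is non-empty for ad-hoc @mentions."""
    # 1. Named team match: the team name appears verbatim in the arguments.
    for name in TEAMS:
        if name.lower() in arguments.lower():
            return name, []
    # 2. Agent mentions: "@engineer @qa on caching" -> ad-hoc team.
    agents = re.findall(r"@([\w-]+)", arguments)
    if agents:
        return None, agents
    # 3. Topic-based inference: score each team by keyword overlap.
    words = set(arguments.lower().split())
    scores = {name: len(words & set(kw)) for name, kw in TEAMS.items()}
    best = max(scores, key=scores.get)
    # Zero overlap is the "ambiguous" case: propose nothing, ask the CEO.
    return (best if scores[best] > 0 else None), []
```

Returning `(None, [])` corresponds to the ambiguous case, where the team lead proposes a best-fit team and lets the CEO confirm or adjust.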

Phase 1 — Spawn Agent Team

On CEO approval, create the agent team:

Teammate setup — for each team member:

  1. Read the agent's role file (agents/[agent].md) for their system prompt and capabilities.

  2. Write a spawn prompt that includes:

    • The agent's role description and expertise
    • The topic and any file context (spec content, code to review, bug report, etc.)
    • Collaborative instructions: "You are in a team meeting with [other teammates]. Share your findings with them. Challenge assumptions you disagree with. Build on their insights. Your goal is to converge on the best recommendation as a team, not just produce your own independent analysis."
  3. Use Sonnet for teammates by default (balances capability and cost).
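The spawn-prompt assembly in steps 1–2 might look like the sketch below. The `agents/[agent].md` layout and the collaborative wording come from the skill; the function signature and prompt concatenation are assumptions.

```python
from pathlib import Path

# Collaborative instructions quoted from the skill text.
COLLAB_NOTE = (
    "You are in a team meeting with {others}. Share your findings with "
    "them. Challenge assumptions you disagree with. Build on their "
    "insights. Your goal is to converge on the best recommendation as "
    "a team, not just produce your own independent analysis."
)

def build_spawn_prompt(agent: str, teammates: list[str], topic: str,
                       file_context: str = "") -> str:
    """Assemble one teammate's spawn prompt: role, topic, context, collab note."""
    role = Path(f"agents/{agent}.md").read_text()  # role description and capabilities
    others = ", ".join(t for t in teammates if t != agent)
    parts = [role, f"Topic: {topic}"]
    if file_context:
        parts.append(f"Context:\n{file_context}")
    parts.append(COLLAB_NOTE.format(others=others))
    return "\n\n".join(parts)
```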

Task list — create a shared task list based on the meeting's purpose. Aim for 5-6 tasks per teammate. Examples:

For a Discovery Sprint:

  • Research competitive landscape and existing solutions
  • Assess market size, pricing opportunities, and unit economics
  • Evaluate technical feasibility and architecture approach
  • Identify top risks and potential blockers
  • Challenge each other's findings and draft consolidated recommendation

For an adversarial code review:

  • Review for security vulnerabilities and data exposure
  • Review for performance issues and scalability
  • Review for edge cases and error handling
  • Challenge each other's findings — debate severity and impact
  • Draft prioritized findings with consensus severity ratings
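One way to keep these example task lists reusable is as templates keyed by meeting type. The task wording below mirrors the examples above; the dict structure, keys, and `make_task_list` helper are hypothetical.

```python
# Hypothetical template store; task text is copied from the examples above.
TASK_TEMPLATES = {
    "discovery": [
        "Research competitive landscape and existing solutions",
        "Assess market size, pricing opportunities, and unit economics",
        "Evaluate technical feasibility and architecture approach",
        "Identify top risks and potential blockers",
        "Challenge each other's findings and draft consolidated recommendation",
    ],
    "code-review": [
        "Review for security vulnerabilities and data exposure",
        "Review for performance issues and scalability",
        "Review for edge cases and error handling",
        "Challenge each other's findings — debate severity and impact",
        "Draft prioritized findings with consensus severity ratings",
    ],
}

def make_task_list(meeting_type: str, topic: str) -> list[dict]:
    """Expand a template into shared task records tagged with the topic."""
    return [
        {"id": i, "task": t, "topic": topic, "done": False}
        for i, t in enumerate(TASK_TEMPLATES[meeting_type], start=1)
    ]
```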

Monitoring — while teammates collaborate:

  • Wait for all teammates to finish before compiling results (do not start implementing or writing the summary yourself)
  • Intervene only if a teammate goes off-topic, appears stuck, or the CEO sends a message
  • If a task appears stuck (teammates sometimes fail to mark tasks complete), nudge the teammate
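A single monitoring pass could be sketched as below, assuming a simple task record shape (`id`, `owner`, `done`); the real agent-teams interface is not shown here, and the three-poll threshold is an arbitrary choice.

```python
STUCK_AFTER = 3  # polls with no completion before we nudge (assumed value)

def update_stalls(stalls: dict, tasks: list) -> list:
    """One monitoring pass: bump a stall counter for each unfinished task
    and return (owner, task_id) pairs that just crossed the nudge threshold."""
    nudges = []
    for t in tasks:
        if t["done"]:
            stalls.pop(t["id"], None)  # finished tasks stop counting
            continue
        stalls[t["id"]] = stalls.get(t["id"], 0) + 1
        if stalls[t["id"]] == STUCK_AFTER:  # nudge exactly once, at the threshold
            nudges.append((t["owner"], t["id"]))
    return nudges
```

Calling this on each poll until every task is done matches the guidance above: wait for all teammates to finish, and nudge only when a task genuinely appears stuck.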

Phase 2 — Compile & Present

When all teammates have finished:

  1. Compile findings into a structured report:

    ## Kickoff Report: [topic]
    
    ### Consensus
    [What the team agreed on — the strongest, most defensible conclusions]
    
    ### Debate Points
    [Where agents disagreed, with each side's reasoning. Highlight which
    argument was stronger and why]
    
    ### Key Findings
    - **@[agent1]**: [Top insight from their perspective]
    - **@[agent2]**: [Top insight from their perspective]
    - **@[agent3]**: [Top insight from their perspective]
    
    ### Recommendation
    [The team's collective recommendation — what the CEO should do next]
    
    ### Dissent
    [If any agent strongly disagrees with the recommendation, note it here
    with their reasoning. The CEO deserves to see minority opinions.]
    
  2. Clean up the team.

  3. Suggest the next step based on what was discussed:

    • Discovery kickoff → /solopreneur:spec
    • Code review kickoff → fix issues or /solopreneur:ship
    • Debug kickoff → implement the fix
    • Launch kickoff → /solopreneur:ship
    • Ad-hoc → suggest the most relevant next action
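The report template in step 1 can be filled in mechanically. The section headings below come from the template; the `findings` shape (agent name mapped to their top insight) and the function signature are assumptions for illustration.

```python
def compile_report(topic: str, consensus: str, findings: dict,
                   recommendation: str, debate: str = "", dissent: str = "") -> str:
    """Render the Kickoff Report sections; Debate Points and Dissent are optional."""
    lines = [f"## Kickoff Report: {topic}", "", "### Consensus", consensus]
    if debate:
        lines += ["", "### Debate Points", debate]
    lines += ["", "### Key Findings"]
    for agent, insight in findings.items():
        lines.append(f"- **@{agent}**: {insight}")
    lines += ["", "### Recommendation", recommendation]
    if dissent:  # minority opinions the CEO should still see
        lines += ["", "### Dissent", dissent]
    return "\n".join(lines)
```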

Source

git clone https://github.com/pcatattacks/solopreneur-plugin

The skill file lives at skills/kickoff/SKILL.md (https://github.com/pcatattacks/solopreneur-plugin/blob/main/skills/kickoff/SKILL.md).

Overview

Kickoff orchestrates a collaborative, team-based meeting using Claude Code’s agent teams. It enables shared task lists, direct teammate dialogue, and adversarial critique to produce deeper analysis and a converged recommendation.

How This Skill Works

Phase 0 selects the team by parsing $ARGUMENTS: a named team from CLAUDE.md, @agent mentions, or topic-based inference (Discovery Sprint for research, Build & QA for code review, Ship & Launch for launch prep). If a file path is present, its contents are read as context for teammates.

Phase 1 spawns the agent team: read each agent's role file (agents/[agent].md), write a tailored spawn prompt with role, topic, and file context, and include the collaborative instructions. Use Sonnet by default, and create a 5–6 item task list per teammate. Teammates share findings, challenge assumptions, and converge on a recommendation, with monitoring to keep the discussion focused and on track.

When to Use It

  • You need deep multi-perspective analysis on a complex topic.
  • You want adversarial review or critique of a plan or design.
  • You’re debugging with competing hypotheses and need debate.
  • You’re preparing a launch plan and want cross-team input before committing.
  • You want a structured, convergent recommendation from a team of experts.

Quick Start

  1. Define the task and team in $ARGUMENTS (e.g., "Discovery Sprint on pricing with [team]").
  2. Present to the CEO and choose "Yes, start the meeting", "Adjust the team", or "Use [lifecycle skill] instead".
  3. On approval, spawn the agent team, load role prompts from agents/[agent].md, and build 5–6 tasks per teammate.

Best Practices

  • Clearly specify the topic and intended outcomes in the initial arguments.
  • Read each teammate’s role file (agents/[agent].md) to tailor prompts.
  • Ask teammates to explicitly challenge assumptions and propose alternatives.
  • Keep the task list to 5–6 tasks per teammate to balance depth and throughput.
  • Monitor progress and intervene when a teammate goes off-topic or stalls.

Example Use Cases

  • Discovery Sprint: compare competitors, market size, and architecture feasibility.
  • Adversarial code review: surface security, performance, and edge cases with debate.
  • Architecture alignment: explore service boundaries and data flows with opposing views.
  • Launch readiness: validate deployment steps, risk, and communications plan.
  • Product-market fit: test hypotheses with diverse perspectives and converge on recommendations.
