bridge-claude

npx machina-cli add skill mikeng-io/agent-skills/bridge-claude --openclaw

Bridge: Claude (Anthropic) Adapter

This file is a REFERENCE DOCUMENT. Any orchestrating skill reads it via the Read tool and embeds its instructions directly into Task agent prompts. It is not invoked as a standalone skill — it is a reusable set of instructions for Anthropic model dispatch.

Input schema, agent prompt template, output schema, verdict logic, timeout formula, artifact format, and status semantics are defined in bridge-commons/SKILL.md. This file covers only Claude-specific connection detection and execution paths.

Bridge Identity

bridge: claude
model_family: anthropic/claude
availability: conditional   # Depends on executor — not always available
connection_preference:
  1: native-dispatch  # Executor is Claude Code — Task tool / Agent Teams
  2: cli              # Any other executor — invoke `claude -p` CLI
  3: api              # Last resort — Anthropic HTTP API via ANTHROPIC_API_KEY
  4: skip             # None available — return SKIPPED (non-blocking)

Step 1: Pre-Flight — Connection Detection

Check A: Task Tool Available?

If the executor is Claude Code (or any agent with native Task tool access), this is the preferred path. No external process needed.

If Task tool available → use native-dispatch path (Steps 3A + 3B).


Check B: Claude Code CLI Installed?

which claude

If found → use CLI path (Step 3C). Any external executor (OpenCode, Codex, Gemini, custom agents) can invoke claude -p to get Claude's analysis without API keys.


Check C: Anthropic API Accessible?

echo ${ANTHROPIC_API_KEY:+found}

If ANTHROPIC_API_KEY is set → use API path (Step 3D).


None of the three available → SKIPPED

{
  "bridge": "claude",
  "status": "SKIPPED",
  "skip_reason": "No Anthropic access — Task tool unavailable, claude CLI not found, ANTHROPIC_API_KEY not set",
  "outputs": []
}

Claude bridge is non-blocking when unavailable. SKIPPED is a valid outcome.
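
The checks above can be sketched as a single shell pre-flight. Only Checks B and C are probeable from a shell; Check A (native Task tool) can only be determined by the executor itself. The function name is illustrative, not part of the bridge contract.

```shell
#!/bin/sh
# Pre-flight sketch for Checks B and C. Check A (native Task tool) cannot be
# probed from a shell, so this only distinguishes cli / api / skip.
detect_claude_path() {
  if command -v claude >/dev/null 2>&1; then
    echo "cli"                                  # Check B: claude CLI on PATH
  elif [ -n "${ANTHROPIC_API_KEY:-}" ]; then
    echo "api"                                  # Check C: API key present
  else
    echo "skip"                                 # Neither: return SKIPPED
  fi
}
```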


Step 2: Select Dispatch Mode

Claude bridge has two native-dispatch modes. Select based on task complexity and environment:

dispatch_mode:
  agent_teams:
    condition: "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1 AND (domains >= 3 OR intensity = thorough)"
    description: "Teammates share a task list and communicate directly — best for complex multi-domain work"
    preferred: true
    caveat: "Env var is necessary but not sufficient — TeamCreate may still fail if this bridge
             is executing as a nested sub-agent (Task → Task). Always guard with a fallback."

  task_tool:
    condition: "Default — always available as fallback"
    description: "Sub-agents report results to parent — best for focused, independent tasks"
    preferred: false

Agent Teams vs Task Tool:

                Task Tool                          Agent Teams
Communication   Sub-agents report to parent only   Teammates message each other directly
Coordination    Parent manages all                 Shared task list, self-coordinating
Best for        Quick focused tasks                Multi-domain debate, complex analysis
Availability    Always                             Requires env var AND top-level execution context

Depth note: Claude sub-agents can spawn their own Task sub-agents (Task → Task works). Agent Teams (TeamCreate) in a nested sub-agent is not guaranteed — the experimental feature may not be available at depth 2+. If this bridge is already running inside a Task agent dispatched by deep-council or another orchestrator, attempt Agent Teams but be prepared to fall back.
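
The selection rule above can be sketched as a small helper (function name and argument order are illustrative, not part of the bridge contract):

```shell
#!/bin/sh
# Sketch of the dispatch_mode rule: agent_teams only when the env var is set
# AND the task is complex enough; task_tool otherwise. Note the runtime guard
# in Step 3A still applies: even "agent_teams" here can fall back at depth.
select_dispatch_mode() {
  domains="$1"    # number of domains in bridge_input.domains
  intensity="$2"  # e.g. "quick" or "thorough"
  if [ "${CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS:-0}" = "1" ] &&
     { [ "$domains" -ge 3 ] || [ "$intensity" = "thorough" ]; }; then
    echo "agent_teams"
  else
    echo "task_tool"
  fi
}
```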


Step 3A: Execute via Agent Teams (preferred for complex tasks)

When CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1 is set and task complexity warrants it (3+ domains or thorough intensity), attempt Agent Teams. Teammates share a task list and can message each other directly — enabling real-time debate between domain experts.

Failure guard — attempt before committing

Before executing the full flow, attempt TeamCreate. If it fails for any reason (not available in this execution context, naming collision, experimental feature restricted at current depth) → immediately fall back to Step 3B. Do not retry Agent Teams.

0. ATTEMPT TeamCreate → "bridge-claude-{session_id}"
   IF TeamCreate fails → go to Step 3B (Task Tool fallback)

1. TeamCreate succeeded → continue
2. Spawn teammates:
   - One per domain from bridge_input.domains
   - Devil's Advocate (always)
   - Integration Checker (always)
3. Create tasks in shared task list — one per teammate
4. Teammates self-coordinate: domain experts complete their analysis,
   Devil's Advocate challenges findings via direct messages,
   Integration Checker surfaces cross-component issues
5. Wait for all tasks complete
6. Synthesize via TaskList
7. TeamDelete → clean up
   IF any step 2–6 fails → call TeamDelete before returning SKIPPED

Teammates communicate findings and challenges directly without routing through the parent. This replaces the bridge-commons consolidation pass — debate-protocol provides a richer version.

Teammate prompts

Domain expert:

You are a {expert_role}. Your task: {task_description}
Scope: {scope} | Context: {context_summary} | Intensity: {intensity}
Focus: {focus_areas from domain-registry}
Return your output as the JSON structure defined in bridge-commons Agent Prompt Template.

Devil's Advocate:

You are a Devil's Advocate for this analysis session.
Scope: {scope} | Context: {context_summary} | Intensity: {intensity}

Your obligations (read debate-protocol/experts/devils-advocate.md for full protocol):
- MUST challenge every CRITICAL and HIGH finding not originated by you, via direct teammate messages
- SHOULD challenge MEDIUM findings when you detect a pattern across multiple findings
- Cross-domain synthesis: actively look for findings whose combination implies a new, higher-severity issue not caught by any single domain expert
- Pre-mortem focus: for each component in scope, ask "what would cause this to fail in production?"

Challenge quality standard: a valid challenge must either (a) identify a missing assumption, (b) propose an alternative explanation that lowers severity, or (c) surface a scenario where the finding does not apply.

Message domain expert teammates directly to challenge their findings. Do not wait for them to send to you first.

Return outputs JSON with domain: "cross-domain". Include both challenge outcomes (findings you successfully challenged/withdrew) and new findings you discovered.

Integration Checker:

You are an Integration Checker for this analysis session.
Scope: {scope} | Context: {context_summary} | Intensity: {intensity}

Focus areas (read debate-protocol/experts/integration-checker.md for full protocol):
- Interface mismatches: where does component A assume something about component B that isn't guaranteed?
- Undocumented contracts: implicit dependencies that work by accident, not by design
- Error propagation gaps: errors that one component produces but callers don't handle
- Timing and ordering dependencies: race conditions, initialization ordering, cascading failures
- Cross-cutting assumptions: things that must be true globally but are only enforced locally

For each finding from domain experts: does it have cross-component implications beyond its stated scope?
If yes, surface those as integration findings even if the original finding is withdrawn.

Return outputs JSON with domain: "integration".

Step 3B: Execute via Task Tool (fallback)

When Agent Teams is not available, spawn parallel Task sub-agents — one per domain + Devil's Advocate + Integration Checker. Sub-agents report results to parent only (no direct inter-agent communication). Build prompts using the Agent Prompt Template from bridge-commons.

Task 1: {domain_1} expert — focus: {focus_areas}, scope: {scope}
Task 2: {domain_2} expert — focus: {focus_areas}, scope: {scope}
...
Task N:   Devil's Advocate — challenge assumptions, find failure modes (domain: "cross-domain")
Task N+1: Integration Checker — cross-component impacts, implicit contracts (domain: "integration")

All tasks run in parallel. After all complete, run the bridge-commons Post-Analysis Protocol. For subsequent rounds, spawn new Task sub-agents with the context packet embedded in their prompts — the parent agent holds all state between rounds and manages the orchestrator synthesis step.


Step 3C: Execute via CLI (external executors)

When any non-Claude-Code executor can call the claude CLI:

timeout {final_timeout} claude -p "{constructed_prompt}" \
  --output-format json \
  --allowedTools "Read,Grep,Glob,Bash(ls *)"

Key flags:

Flag                          Purpose
-p "prompt"                   Prompt string — non-interactive mode
--output-format json          Structured JSON output for parsing
--output-format stream-json   Streaming JSON for real-time processing
--allowedTools                Restrict what Claude can do (read-only for analysis)
--continue                    Resume the most recent session

For read-only analysis, scope tools to Read,Grep,Glob,Bash(ls *). For implementation tasks, use the full tool set.
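
A minimal wrapper for the read-only invocation might look like this. The top-level `result` field in the CLI's JSON output is an assumption here; verify it against your installed claude version.

```shell
#!/bin/sh
# Sketch: run the read-only CLI path and extract the response text.
# Assumes the CLI's JSON output carries the reply under a "result" key.
run_claude_readonly() {
  prompt="$1"
  timeout_s="${2:-600}"
  timeout "$timeout_s" claude -p "$prompt" \
    --output-format json \
    --allowedTools "Read,Grep,Glob,Bash(ls *)" |
    extract_result
}

# Parse step kept separate so it can be reused on captured output.
extract_result() {
  python3 -c 'import sys, json; print(json.load(sys.stdin).get("result", ""))'
}
```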


Step 3D: Execute via Anthropic API (last resort)

# Discover latest model first — do not hardcode
CLAUDE_MODEL=$(curl -s -H "x-api-key: $ANTHROPIC_API_KEY" \
  https://api.anthropic.com/v1/models | python3 -c \
  "import sys,json; models=json.load(sys.stdin)['data']; print(models[0]['id'])")

curl -s -X POST https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "Content-Type: application/json" \
  -d "{
    \"model\": \"$CLAUDE_MODEL\",
    \"max_tokens\": 8096,
    \"messages\": [{\"role\": \"user\", \"content\": \"{constructed_prompt}\"}]
  }"

Single API call covers all domains in one prompt. Less parallelism than native-dispatch paths.
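
After the call returns, the reply text sits in the first content block of the response. A small parse helper, assuming the standard Messages API response shape:

```shell
#!/bin/sh
# Sketch: extract the assistant text from an Anthropic Messages API response,
# i.e. the "text" of the first entry in the top-level "content" array.
extract_message_text() {
  python3 -c 'import sys, json
resp = json.load(sys.stdin)
print(resp["content"][0]["text"])'
}
```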


Output

See bridge-commons Output Schema. Bridge-specific fields:

{
  "bridge": "claude",
  "model_family": "anthropic/claude",
  "connection_used": "native-dispatch | cli | api",
  "agents_spawned": 4
}

Output ID prefix: C (e.g., C001, C002).


Notes

  • Not always available — check Task tool access or ANTHROPIC_API_KEY before using
  • SKIPPED is non-blocking — if unavailable, other bridges continue
  • native-dispatch is preferred — richer parallel dispatch when running in Claude Code
  • Agent Teams replaces consolidation pass — when available, debate-protocol runs instead
  • Agent Teams guard is mandatory: TeamCreate can fail even when the env var is set (nested sub-agent context, depth limit, naming collision); always fall back to Task Tool on failure
  • Sub-agent recursion depth — Task → Task works reliably; Agent Teams inside a Task agent (Task → TeamCreate) is experimental and context-dependent; do not assume it's available at depth 2+
  • TeamDelete on failure — if any step between TeamCreate and synthesis fails, call TeamDelete before returning SKIPPED to avoid orphaned teams
  • API path works from any executor — fallback for non-Claude orchestrators
  • Task type drives framing — the same bridge handles review, planning, research, etc.

Source

git clone https://github.com/mikeng-io/agent-skills
# Skill file: skills/bridge-claude/SKILL.md

Overview

Bridge-claude is a reference adapter that enables any orchestrating skill to dispatch to Claude sub-agents or the Anthropic API. It is read through the Read tool and embedded into task prompts, providing Claude-specific connection detection and execution paths. It works under any executor: Claude Code, OpenCode, Codex, Gemini, or custom agents.

How This Skill Works

The bridge performs a pre-flight check to detect available dispatch paths: native Task tool access (Claude Code), the claude CLI, or an Anthropic API key. Based on what's available, it chooses native-dispatch, CLI, or API, and routes Claude requests accordingly. If none are available, it returns a non-blocking SKIPPED result with an explanatory output.

When to Use It

  • When the executor has native Claude Code access via the Task tool for fast, integrated dispatch.
  • When Claude Code CLI is installed and you want CLI-based Claude analysis without API keys.
  • When ANTHROPIC_API_KEY is set and you prefer the HTTP API path for Claude dispatch.
  • When running inside a nested sub-agent and you need a fallback path to Claude.
  • When you want a non-blocking SKIPPED outcome if no Claude access is available.

Quick Start

  1. Step 1: Run pre-flight checks to detect Task tool, claude CLI, or ANTHROPIC_API_KEY.
  2. Step 2: Choose a dispatch path (native-dispatch, CLI, or API) based on availability.
  3. Step 3: Dispatch to Claude sub-agents or via the API using the selected path.

Best Practices

  • Validate at startup that at least one path (Task tool, claude CLI, or API key) is accessible before prompts are generated.
  • Prefer agent_teams for multi-domain, collaborative work; fall back to task_tool for focused tasks.
  • Guard deeply nested deployments: Agent Teams may not be available at depth, so always be prepared to fall back to the Task tool.
  • Treat SKIPPED as a non-blocking, prompt-safe outcome and provide a clear fallback message.
  • Document and test each path (native, CLI, API) in staging to ensure prompt templates align with the chosen path.

Example Use Cases

  • Orchestrator uses the Task tool path to dispatch Claude tasks to Claude Code teammates in a multi-domain prompt.
  • An external agent triggers claude -p in a CLI-only environment to obtain Claude analysis without using API keys.
  • Code generation requests are sent via the Anthropic API when ANTHROPIC_API_KEY is present.
  • A nested sub-agent falls back to Task → Task dispatch when the Agent Teams path is not available at its depth.
  • If no access exists, the orchestrator receives a SKIPPED output and continues without Claude.
