bridge-codex
Bridge: Codex Multi-Agent Adapter
This file is a REFERENCE DOCUMENT. Any orchestrating skill reads it via the Read tool and embeds its instructions directly into Task agent prompts. It is not invoked as a standalone skill — it is a reusable set of instructions for Codex review dispatch via MCP server or CLI.
Input schema, output schema, verdict logic, artifact format, and status semantics are defined in bridge-commons/SKILL.md. This file covers Codex-specific connection detection, reasoning level, prompt adaptation, and execution.
Bridge Identity
bridge: codex
model_family: openai/codex
availability: conditional
connection_preference:
1: native-dispatch # Executor is Codex — multi-agent dispatch (experimental)
2: mcp # Any executor with MCP access — mcp__codex__codex server
3: cli # Any other executor — codex exec
4: halt # None available — surface advisory, offer setup
Step 1: Pre-Flight — Connection Detection
Check A: Native Dispatch?
If the executor is Codex CLI with multi-agent support enabled, this is the preferred path — spawn parallel Codex agents rather than routing through MCP or CLI.
# Check if running inside a Codex execution context
echo ${CODEX_SESSION_ID:+found}
# Check if multi-agent feature is enabled
codex features list 2>/dev/null | grep -q "multi_agent" && echo "enabled"
If in a Codex session AND multi-agent is enabled → use native dispatch (multi-agent path).
This is an experimental feature. If multi-agent is not enabled, or the executor is not Codex → proceed to Check B.
Check B: MCP Server Configured?
Look for a codex entry in the active MCP configuration:
# Check project-level MCP config
cat .mcp.json 2>/dev/null | python3 -c "import sys,json; d=json.load(sys.stdin); print('found' if 'codex' in d.get('mcpServers',{}) else 'not-found')" 2>/dev/null
# Or check Claude's global MCP settings
cat ~/.claude.json 2>/dev/null | python3 -c "import sys,json; d=json.load(sys.stdin); print('found' if 'codex' in str(d) else 'not-found')" 2>/dev/null
If found → use MCP path (Step 3A). No further pre-flight needed — MCP server handles auth internally.
Check C: CLI Available?
which codex
If found → proceed to Check D.
If not found → no connection available — go to Step 2 (Advisory).
Check D: Authenticated? (CLI path only)
codex login status
Exit code 0 → authenticated. Other → go to Step 2 (Advisory) with reason: not_authenticated.
Check E: Multi-Agent Feature Enabled? (CLI path only — optional)
codex features list
Look for multi_agent marked as enabled.
- If enabled → proceed with parallel multi-agent dispatch (one sub-agent per domain)
- If not enabled → proceed in single-agent mode (one Codex session reviews all domains together)
Multi-agent is a progressive enhancement. Single-agent mode is a valid fallback — do not halt.
Record multi_agent_enabled: true/false in output for caller transparency.
→ Use CLI path (Step 3B).
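The Check A–E flow above can be summarized as a pure decision function. This is a hypothetical sketch — the real checks shell out to the codex CLI and read MCP config files, while this helper only mirrors the branching order:

```python
def select_connection(in_codex_session: bool, multi_agent: bool,
                      mcp_has_codex: bool, cli_installed: bool,
                      cli_authenticated: bool) -> str:
    """Mirror Checks A-E: return the connection path to use."""
    # Check A: native dispatch only inside a Codex session with multi-agent on
    if in_codex_session and multi_agent:
        return "native-dispatch"
    # Check B: MCP server handles auth internally, no further pre-flight
    if mcp_has_codex:
        return "mcp"
    # Checks C + D: CLI must be both installed and authenticated
    if cli_installed and cli_authenticated:
        return "cli"  # Check E only toggles single- vs multi-agent mode
    # No connection available: surface the advisory (Step 2)
    return "halt"
```

Note that Check E never produces halt: it only selects single- or multi-agent mode on the CLI path.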
Step 2: Advisory — Halt and Present Options
Do not skip silently. Surface the appropriate message to the user and wait for a choice.
Advisory: MCP Server Not Configured + CLI Not Found
⚠ Codex is not connected. This bridge requires either the Codex MCP server
or the Codex CLI.
Options:
[1] Set up MCP server automatically
I will add the Codex MCP server to .mcp.json so future sessions
use it without any CLI installation needed.
Requires: Node.js 18+ and npx available.
[2] Install the Codex CLI
Run: npm install -g @openai/codex
Then re-run this review.
[3] Skip Codex bridge
Continue the review without Codex. Other available bridges will run.
[4] Abort
Stop the entire review.
What would you like to do? (1/2/3/4)
If user chooses [1] — Auto-setup MCP server:
Write (or merge) into .mcp.json:
{
"mcpServers": {
"codex": {
"command": "npx",
"args": ["-y", "codex", "mcp-server"]
}
}
}
Then verify the server is reachable. If successful → continue with MCP path (Step 3A). If verification fails → tell the user and offer options [2]/[3]/[4] again.
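A minimal merge sketch for option [1], assuming the file (if present) is valid JSON; the server entry matches the block above, and the helper name is illustrative:

```python
import json
import pathlib

def add_codex_mcp_server(path: str = ".mcp.json") -> dict:
    """Merge the codex MCP server entry into .mcp.json without clobbering others."""
    p = pathlib.Path(path)
    config = json.loads(p.read_text()) if p.exists() else {}
    servers = config.setdefault("mcpServers", {})
    # Merge rather than overwrite: keep any servers already configured
    servers.setdefault("codex", {
        "command": "npx",
        "args": ["-y", "codex", "mcp-server"],
    })
    p.write_text(json.dumps(config, indent=2) + "\n")
    return config
```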
If user chooses [2]: Return status: HALTED, halt_reason: cli_not_installed. Show install command.
If user chooses [3]: Return status: SKIPPED, skip_reason: user_chose_skip.
If user chooses [4]: Return status: ABORTED. Calling orchestrator must stop entire review.
Advisory: CLI Found But Not Authenticated
⚠ Codex CLI found but not authenticated.
To log in:
codex login # Browser OAuth flow (interactive)
codex login --device-auth # Device code flow (headless/CI)
After authenticating, re-run this review.
Or:
[1] Skip Codex bridge and continue
[2] Abort the entire review
Return status: HALTED, halt_reason: not_authenticated.
Non-Interactive Environments (Automated Pipelines)
If no interactive context is available, return status: HALTED with the full advisory text in halt_message. Never silently skip in a way that hides a configuration gap.
Reasoning Level Selection
Evaluate the review context and select the Codex reasoning level before building the prompt. This applies to both MCP and CLI paths.
Decision Signals
| Signal | Reasoning Level |
|---|---|
| Security audit, cryptographic review, financial compliance | xhigh |
| Multi-component architecture, 3+ CRITICAL findings expected, complex dependency chains | high |
| Standard code review, single-domain analysis, routine audit | medium |
Evaluate in this order:
- If the request explicitly mentions "critical", "security", "cryptographic", "financial", or "compliance" → xhigh
- If scope covers 20+ files OR 3+ domains with HIGH risk signals → high
- Otherwise → medium
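The ordered evaluation can be sketched as a small function (illustrative names; the keyword list comes from the signals table above):

```python
CRITICAL_TERMS = ("critical", "security", "cryptographic", "financial", "compliance")

def select_reasoning_level(request: str, file_count: int,
                           high_risk_domains: int) -> str:
    """Apply the decision signals in order; first match wins."""
    text = request.lower()
    if any(term in text for term in CRITICAL_TERMS):
        return "xhigh"   # security/compliance reviews get maximum depth
    if file_count >= 20 or high_risk_domains >= 3:
        return "high"    # large scope or multiple high-risk domains
    return "medium"      # standard single-domain review
```

Remember that an xhigh result still requires the mandatory user alert before dispatch.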
Xhigh Alert (MANDATORY)
When xhigh is selected, alert the user before proceeding:
⚠ Reasoning level: XHIGH
Codex will use maximum reasoning depth for this review.
This increases token usage and may take 2–3× longer than standard.
Continue? (y/n)
If user declines → fall back to high. Return reasoning_level: "high" in output.
Embedding Reasoning Level
MCP path (Step 3A): pass as reasoning parameter
CLI path (Step 3B): pass as --config reasoning-effort={level} (config override)
Store selected level in reasoning_level output field.
Step 3: Build Domain Prompt
Codex's multi-agent capability means the prompt is addressed to a coordinator, not a single domain expert. This differs from the bridge-commons Agent Prompt Template (which addresses one expert per call). Adapt as follows:
You are a multi-agent code review coordinator. Spawn one agent per domain
below, run them in parallel using your multi-agent capability, wait for all
to complete, then return a consolidated findings JSON.
Review scope: {review_scope}
Context: {context_summary}
Intensity: {intensity}
Domains to analyze (spawn one agent per domain):
{for each domain:
"- {domain_name}: focus on {focus_areas from domain-registry}"}
Each agent must return outputs using the schema from bridge-commons:
{
"domain": "...",
"outputs": [
{
"severity": "CRITICAL | HIGH | MEDIUM | LOW | INFO",
"title": "...",
"description": "...",
"evidence": "...",
"action": "..."
}
]
}
After all agents complete, consolidate all findings and return:
{
"domains_analyzed": [...],
"outputs": [...],
"verdict": "PASS | FAIL | CONCERNS"
}
In single-agent mode, drop the coordinator framing and use the bridge-commons Agent Prompt Template directly, covering all domains in one prompt.
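One way to sketch the mode-dependent prompt adaptation (a hypothetical builder; domain names and focus areas come from the caller's domain registry, and the template strings are abbreviated):

```python
def build_prompt(domains: dict, multi_agent: bool, scope: str) -> str:
    """Build the coordinator prompt (multi-agent) or single-expert prompt."""
    domain_lines = "\n".join(
        f"- {name}: focus on {focus}" for name, focus in domains.items()
    )
    if multi_agent:
        header = ("You are a multi-agent code review coordinator. Spawn one "
                  "agent per domain below and run them in parallel.")
    else:
        # Single-agent fallback: one session covers every domain
        header = "You are a code reviewer. Analyze all domains below in one pass."
    return f"{header}\n\nReview scope: {scope}\n\nDomains:\n{domain_lines}"
```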
Timeout Estimation
Use bridge-commons base timeout table and intensity multiplier. Codex multi-agent adds sub-agent spawn overhead — apply a higher base when multi-agent is enabled:
# When multi_agent_enabled: true — increase base by 50%
# e.g., 5-20 files: 180s → 270s to account for agent spawn latency
# When multi_agent_enabled: false — use bridge-commons base times directly
No separate bridge multiplier otherwise.
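The timeout rule above reduces to one formula; base times and intensity multipliers come from the bridge-commons table (the worked example reproduces the 180s → 270s case):

```python
def estimate_timeout(base_seconds: int, intensity_multiplier: float,
                     multi_agent: bool) -> int:
    """bridge-commons base x intensity, with +50% base when multi-agent is on."""
    base = base_seconds * 1.5 if multi_agent else base_seconds
    return int(base * intensity_multiplier)
```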
Step 3A: Execute via MCP Server (Preferred)
Use the codex MCP tool directly. The MCP server runs codex mcp-server and exposes two tools:
Model Selection — Check Latest at Runtime
Before calling either MCP or CLI, determine the current latest Codex model:
# Via CLI — lists available models
codex prompt --models 2>/dev/null | head -5
# If unavailable, omit model field to use server default
Do NOT hardcode a model name. If model discovery fails, omit the model parameter and let the server select its default.
Tool: codex — Start a session
Call: mcp__codex__codex
Parameters:
prompt: {constructed_prompt}
approval-policy: "never" # No interactive approval prompts
sandbox: "read-only" # Analysis only — no file writes
model: {latest from models list, or omit}
reasoning: "{medium|high|xhigh}" # From Reasoning Level Selection
Capture structuredContent.threadId from response for multi-turn use.
Tool: codex-reply — Continue session (if needed)
Call: mcp__codex__codex-reply
Parameters:
prompt: "Summarize and consolidate all agent findings into the JSON format specified."
threadId: {threadId from previous call}
The codex-reply call implements the bridge-commons Post-Analysis Protocol for the MCP path. Use codex-reply for each subsequent round — the thread maintains full Round 1 history, so only inject the context packet:
Call: mcp__codex__codex-reply
Parameters:
prompt: "{role-specific Round N prompt from bridge-commons context packet}"
threadId: {threadId from Round 1}
Run one codex + N codex-reply calls per role, one role at a time or in parallel sessions.
Step 3B: Execute via CLI (Fallback)
# Detect latest model first (if CLI supports it)
CODEX_MODEL=$(codex prompt --models 2>/dev/null | awk 'NR==1{print $1}')
MODEL_FLAG=${CODEX_MODEL:+--model $CODEX_MODEL} # omit flag if empty
timeout {final_timeout} codex exec "{constructed_prompt}" \
--sandbox read-only \
--ask-for-approval never \
--json \
--output-last-message /tmp/codex-bridge-{review_id}.json \
--ephemeral \
--skip-git-repo-check \
$MODEL_FLAG \
--config reasoning-effort={medium|high|xhigh}
For the Post-Analysis Protocol via CLI, use separate codex exec calls per round — no session continuity. Embed the full previous-round context in each Round N prompt (same stateless pattern as Gemini CLI).
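A minimal sketch of that stateless pattern (hypothetical helper; the actual context-packet format is defined in bridge-commons):

```python
def build_round_prompt(round_n_prompt: str, previous_round_json: str) -> str:
    """CLI rounds have no session continuity: embed prior context verbatim."""
    return (
        "Context from the previous round (verbatim):\n"
        f"{previous_round_json}\n\n"
        f"{round_n_prompt}"
    )
```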
CLI Error Handling
| Exit code | Meaning | Action |
|---|---|---|
| 0 | Success | Parse --output-last-message file for findings |
| 124 | Timeout (shell) | Return SKIPPED, skip_reason: timeout_after_{n}s |
| Other | CLI error | Capture stderr, return SKIPPED with error detail |
| Valid exit, invalid JSON | Parse error | Attempt partial extraction; else SKIPPED |
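The exit-code table maps to bridge statuses roughly as follows (a sketch with illustrative field names; the partial-extraction branch for invalid JSON is elided):

```python
import json

def handle_cli_result(exit_code: int, output_path: str, timeout_s: int,
                      stderr: str) -> dict:
    """Map a codex exec result to a bridge status per the table above."""
    if exit_code == 124:  # shell `timeout` killed the run
        return {"status": "SKIPPED", "skip_reason": f"timeout_after_{timeout_s}s"}
    if exit_code != 0:    # any other CLI failure
        return {"status": "SKIPPED", "skip_reason": f"cli_error: {stderr.strip()}"}
    try:
        with open(output_path) as f:
            findings = json.load(f)  # the --output-last-message file
    except (OSError, json.JSONDecodeError):
        # Valid exit but unusable JSON: attempt partial extraction first (elided)
        return {"status": "SKIPPED", "skip_reason": "invalid_json"}
    return {"status": "OK", "findings": findings}
```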
Output
See bridge-commons Output Schema. Bridge-specific fields:
{
"bridge": "codex",
"model_family": "openai/codex",
"connection_used": "native-dispatch | mcp | cli",
"multi_agent_enabled": true,
"reasoning_level": "medium | high | xhigh"
}
Output ID prefix: X (e.g., X001, X002).
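Assigning the X-series IDs can be sketched as a one-liner over the consolidated findings (illustrative helper name):

```python
def assign_output_ids(findings: list) -> list:
    """Prefix each finding with a sequential X-series ID (X001, X002, ...)."""
    return [dict(f, id=f"X{i:03d}") for i, f in enumerate(findings, start=1)]
```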
Notes
- MCP server is preferred — no CLI install needed, auth handled internally, persistent sessions via codex-reply
- Auto-setup option — the orchestrator can write .mcp.json to enable the MCP server without the user installing anything
- codex exec ≠ codex — bare codex opens an interactive session; always use codex exec for programmatic use
- --sandbox read-only + --ask-for-approval never are required for analysis-only mode
- HALTED ≠ SKIPPED — HALTED means the user must make a choice before the review can continue
- Model: check latest via codex prompt --models at runtime; omit to use the server default — never hardcode a model name
- xhigh reasoning requires explicit user confirmation before proceeding — never activate silently
- Reasoning level persists in output (reasoning_level field) for caller transparency
- Timeout base increases when multi-agent is enabled (agent spawn overhead)
Source
https://github.com/mikeng-io/agent-skills/blob/master/skills/bridge-codex/SKILL.md
Overview
Bridge-codex is a reference adapter used by orchestrating skills to dispatch Codex-based review across multiple agents. It supports an MCP server path with auto-setup, a CLI fallback, and an interactive pre-flight advisory when no connection is configured, embedding the correct flags in every case. It is usable by deep-council, deep-review, deep-audit, or any future skill that needs Codex review.
How This Skill Works
This is not a standalone skill; it’s a reusable set of instructions embedded into task prompts via the Read tool. It defines a bridge identity and a pre-flight flow (Check A–E) to choose between native Codex dispatch, MCP, or CLI paths, including multi_agent_enabled signaling. The dispatch path is selected automatically based on configuration, and the resulting prompt is prepared with the appropriate flags for Codex-based review.
When to Use It
- When orchestrating Codex-based review across multiple domains that requires parallel multi-agent dispatch
- When an MCP server is configured and you want MCP routing with auto-setup
- When Codex CLI is installed and authenticated and you prefer CLI-based review
- When you need an interactive advisory to guide setup if no connection is configured
- When embedding Codex prompt adaptations and proper flags into orchestrator prompts
Quick Start
- Step 1: Identify whether MCP, CLI, or native Codex path should be used by performing pre-flight checks
- Step 2: If MCP is configured, route through MCP; else if CLI is available, prepare CLI path; else show advisory
- Step 3: Dispatch Codex-based review with the appropriate flags and embed instructions into prompts via Read
Best Practices
- Always run the pre-flight checks (A–E) before dispatching Codex review
- Prefer native Codex dispatch when multi-agent is available and enabled
- If MCP is configured, route through MCP to simplify authentication
- If using CLI, verify authentication and multi_agent capability before dispatch
- Embed and propagate correct flags and advisory messages in the generated prompts
Example Use Cases
- A deep-review orchestrator uses MCP path to dispatch Codex-based reviews across several agents and aggregates results
- A deep-audit workflow falls back to the Codex CLI path after confirming CLI availability and login status
- An orchestration prints an interactive advisory when MCP and CLI are unavailable, guiding setup
- Auto-setup is triggered to configure MCP server for codex, enabling seamless dispatch
- In a Codex-enabled session, native multi-agent dispatch is used when supported by the executor