
codex

npx machina-cli add skill softaworks/agent-toolkit/codex --openclaw

Codex Skill Guide

Running a Task

  1. Default to the gpt-5.2 model. Ask the user (via AskUserQuestion) which reasoning effort to use (xhigh, high, medium, or low). The user can override the model if needed (see Model Options below).
  2. Select the sandbox mode required for the task; default to --sandbox read-only unless edits or network access are necessary.
  3. Assemble the command with the appropriate options:
    • -m, --model <MODEL>
    • --config model_reasoning_effort="<xhigh|high|medium|low>"
    • --sandbox <read-only|workspace-write|danger-full-access>
    • --full-auto
    • -C, --cd <DIR>
    • --skip-git-repo-check
  4. Always use --skip-git-repo-check.
  5. When continuing a previous session, use codex exec --skip-git-repo-check resume --last with the prompt piped via stdin. When resuming, don't add any configuration flags unless the user explicitly requests them, e.g. by specifying the model or reasoning effort when asking to resume. Resume syntax: echo "your prompt here" | codex exec --skip-git-repo-check resume --last 2>/dev/null. Any flags must be inserted between exec and resume.
  6. IMPORTANT: By default, append 2>/dev/null to all codex exec commands to suppress thinking tokens (stderr). Only show stderr if the user explicitly requests to see thinking tokens or if debugging is needed.
  7. Run the command, capture stdout/stderr (filtered as appropriate), and summarize the outcome for the user.
  8. After Codex completes, inform the user: "You can resume this Codex session at any time by saying 'codex resume' or asking me to continue with additional analysis or changes."
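The steps above can be sketched as a single guarded command; the prompt and repo path are hypothetical:

```shell
# Sketch of a typical first run; prompt and repo path are made up.
# Flags follow steps 1-6 above; 2>/dev/null hides thinking tokens.
CMD='codex exec -m gpt-5.2 --config model_reasoning_effort="high" --sandbox read-only -C /path/to/repo --skip-git-repo-check'
if command -v codex >/dev/null 2>&1; then
  echo "Review the module layout and flag dead code" | eval "$CMD" 2>/dev/null
else
  echo "codex CLI not found; would run: $CMD" >&2
fi
```

The `command -v` guard keeps the sketch safe to paste even where the CLI is not installed.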

Quick Reference

| Use case | Sandbox mode | Key flags |
| --- | --- | --- |
| Read-only review or analysis | read-only | `--sandbox read-only 2>/dev/null` |
| Apply local edits | workspace-write | `--sandbox workspace-write --full-auto 2>/dev/null` |
| Permit network or broad access | danger-full-access | `--sandbox danger-full-access --full-auto 2>/dev/null` |
| Resume recent session | Inherited from original | `echo "prompt" \| codex exec --skip-git-repo-check resume --last 2>/dev/null` (no flags unless explicitly requested) |
| Run from another directory | Match task needs | `-C <DIR>` plus other flags, `2>/dev/null` |
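Flag placement when resuming with an override can be sketched as follows (hypothetical prompt; flags must sit between `exec` and `resume`):

```shell
# Sketch: resume the last session with an explicit model override,
# because the user asked for it. Flags go between `exec` and `resume`.
RESUME_CMD='codex exec -m gpt-5.2 --skip-git-repo-check resume --last'
if command -v codex >/dev/null 2>&1; then
  echo "Also check the test coverage" | eval "$RESUME_CMD" 2>/dev/null
else
  echo "codex CLI not found; would run: $RESUME_CMD" >&2
fi
```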

Model Options

| Model | Best for | Context window | Key features |
| --- | --- | --- | --- |
| gpt-5.2-max | Max model: ultra-complex reasoning, deep problem analysis | 400K input / 128K output | 76.3% SWE-bench, adaptive reasoning, $1.25/$10.00 |
| gpt-5.2 | Flagship model: software engineering, agentic coding workflows | 400K input / 128K output | 76.3% SWE-bench, adaptive reasoning, $1.25/$10.00 |
| gpt-5.2-mini | Cost-efficient coding (4x more usage allowance) | 400K input / 128K output | Near-SOTA performance, $0.25/$2.00 |
| gpt-5.1-thinking | Ultra-complex reasoning, deep problem analysis | 400K input / 128K output | Adaptive thinking depth, runs 2x slower on hardest tasks |

GPT-5.2 Advantages: 76.3% SWE-bench (vs 72.8% GPT-5), 30% faster on average tasks, better tool handling, reduced hallucinations, improved code quality. Knowledge cutoff: September 30, 2024.

Reasoning Effort Levels:

  • xhigh - Ultra-complex tasks (deep problem analysis, multi-step reasoning)
  • high - Complex tasks (large refactors, architecture, security analysis, performance optimization)
  • medium - Standard tasks (routine refactoring, code organization, feature additions, bug fixes)
  • low - Simple tasks (quick fixes, simple changes, code formatting, documentation)

Cached Input Discount: 90% off ($0.125/M tokens) for repeated context, cache lasts up to 24 hours.

Following Up

  • After every codex command, immediately use AskUserQuestion to confirm next steps, collect clarifications, or decide whether to resume with codex exec resume --last.
  • When resuming, pipe the new prompt via stdin: echo "new prompt" | codex exec resume --last 2>/dev/null. The resumed session automatically uses the same model, reasoning effort, and sandbox mode from the original session.
  • Restate the chosen model, reasoning effort, and sandbox mode when proposing follow-up actions.
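A plain resume, per the notes above, carries no extra flags so the session inherits its original settings (prompt is hypothetical):

```shell
# Sketch: plain resume; no extra flags, so the session inherits the
# original model, reasoning effort, and sandbox mode.
PROMPT="Refine the previous refactor and rerun the checks"
if command -v codex >/dev/null 2>&1; then
  echo "$PROMPT" | codex exec --skip-git-repo-check resume --last 2>/dev/null
else
  echo "codex CLI not found; skipping resume" >&2
fi
```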

Error Handling

  • Stop and report failures whenever codex --version or a codex exec command exits non-zero; request direction before retrying.
  • Before you use high-impact flags (--full-auto, --sandbox danger-full-access, --skip-git-repo-check) ask the user for permission using AskUserQuestion unless it was already given.
  • When output includes warnings or partial results, summarize them and ask how to adjust using AskUserQuestion.

CLI Version

Requires Codex CLI v0.57.0 or later for GPT-5.2 model support. The CLI defaults to gpt-5.2 on all platforms. Check version: codex --version

Use /model slash command within a Codex session to switch models, or configure default in ~/.codex/config.toml.
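A minimal sketch of such a config file is below; the key names follow the Codex CLI's config.toml conventions, but verify them against the documentation for your installed version:

```toml
# Sketch of ~/.codex/config.toml defaults — key names assumed from
# Codex CLI config conventions; check against your CLI version.
model = "gpt-5.2"
model_reasoning_effort = "high"
sandbox_mode = "read-only"
```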

Source

git clone https://github.com/softaworks/agent-toolkit.git — the skill file lives at skills/codex/SKILL.md in that repository.

Overview

Codex is a CLI tool to run OpenAI Codex for code analysis, refactoring, and automated edits via codex exec and codex resume. It defaults to the GPT-5.2 model for state-of-the-art software engineering and lets you tune reasoning effort and sandbox mode. It also supports resuming ongoing Codex sessions and precise output capture.

How This Skill Works

You start with the default GPT-5.2 model and optionally pick a reasoning effort level (xhigh, high, medium, or low). Then choose a sandbox mode (read-only by default; workspace-write for edits; danger-full-access for network access). The command is assembled with -m/--model, --config model_reasoning_effort, --sandbox, --full-auto, -C/--cd, and --skip-git-repo-check. Always include --skip-git-repo-check and append 2>/dev/null to suppress thinking tokens, unless the user asks to see them. After execution, the outcome is summarized for the user; the session can be resumed later with codex resume using the provided syntax.
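The assembly step can be sketched as a small helper that builds the invocation from the three choices the user makes; the helper itself is hypothetical, not part of the skill:

```shell
# Hypothetical helper: assemble the codex exec invocation described
# above from the model, reasoning effort, and sandbox mode chosen.
build_codex_cmd() {
  model="$1"; effort="$2"; sandbox="$3"
  printf 'codex exec -m %s --config model_reasoning_effort="%s" --sandbox %s --skip-git-repo-check' \
    "$model" "$effort" "$sandbox"
}

# Print the command for a medium-effort, workspace-write run.
build_codex_cmd gpt-5.2 medium workspace-write
```

Pipe a prompt into the printed command (or `eval` it) to actually run the task.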

When to Use It

  • Read-only review or analysis of a codebase to identify issues or opportunities for refactoring.
  • Apply local edits or automated changes across files using workspace-write sandbox and --full-auto.
  • Permit network access or broad access when dependencies or external data are needed during analysis.
  • Resume a recent Codex session to continue analysis or edits from where you left off.
  • Run Codex from another directory using -C <DIR> to target a specific project path.

Quick Start

  1. Default to gpt-5.2 and ask the user which reasoning effort to use (xhigh, high, medium, low).
  2. Choose the sandbox mode (read-only by default; switch to workspace-write or danger-full-access if edits or network access are needed).
  3. Run codex exec with -m/--model, --config, --sandbox, --full-auto, -C/--cd, and --skip-git-repo-check; append 2>/dev/null; summarize the output and offer to resume with codex resume.
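An edit-applying variant of the quick start, with a hypothetical prompt, looks like this:

```shell
# Sketch: apply automated edits; workspace-write plus --full-auto lets
# Codex write files in the workspace without per-step approval.
EDIT_CMD='codex exec -m gpt-5.2 --config model_reasoning_effort="medium" --sandbox workspace-write --full-auto --skip-git-repo-check'
if command -v codex >/dev/null 2>&1; then
  echo "Rename helpers to snake_case across src/" | eval "$EDIT_CMD" 2>/dev/null
else
  echo "codex CLI not found; would run: $EDIT_CMD" >&2
fi
```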

Best Practices

  • Default to the gpt-5.2 model and explicitly ask for the reasoning effort level (xhigh, high, medium, or low) when starting a task.
  • Select the sandbox mode that matches the task: read-only for analysis, workspace-write for edits, or danger-full-access for network-enabled work.
  • Always include --skip-git-repo-check in codex exec commands to satisfy safety checks and consistency.
  • When resuming a session, use the dedicated resume syntax and avoid adding unnecessary flags unless requested.
  • Append 2>/dev/null by default to suppress thinking tokens; reveal them only if the user explicitly asks to debug.

Example Use Cases

  • A developer asks Codex to perform a static analysis on a Python module to identify refactoring opportunities.
  • You need to apply naming convention fixes across multiple JavaScript files using automated edits.
  • The task requires fetching dependencies from the network, so you switch to the danger-full-access sandbox with --full-auto.
  • You want to continue a previous Codex session to refine a refactor and test new changes.
  • You start Codex from a different project directory to target a specific repository.
