Model Usage
Verified: @JustAskNudge
npx machina-cli add skill @JustAskNudge/model-usage-skill --openclaw
Overview
Get per-model usage cost from CodexBar's local cost logs. Supports "current model" (most recent daily entry) or "all models" summaries for Codex or Claude.
TODO: add Linux CLI support guidance once CodexBar CLI install path is documented for Linux.
Quick start
- Fetch cost JSON via CodexBar CLI or pass a JSON file.
- Use the bundled script to summarize by model.
python {baseDir}/scripts/model_usage.py --provider codex --mode current
python {baseDir}/scripts/model_usage.py --provider codex --mode all
python {baseDir}/scripts/model_usage.py --provider claude --mode all --format json --pretty
Current model logic
- Uses the most recent daily row with modelBreakdowns.
- Picks the model with the highest cost in that row.
- Falls back to the last entry in modelsUsed when breakdowns are missing.
- Override with --model <name> when you need a specific model.
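One reading of the selection rules above can be sketched in Python. The row shape (a modelBreakdowns model-to-cost mapping and an ordered modelsUsed list per daily row) is an illustrative assumption about CodexBar's cost JSON, not a verified schema:

```python
# Sketch of "current model" selection. Row fields below are assumed,
# not a verified CodexBar schema; see references/codexbar-cli.md.
def pick_current_model(rows, override=None):
    """Mirror the rules: --model override wins; otherwise use the most
    recent row that has modelBreakdowns and take its highest-cost model;
    otherwise fall back to the last entry of the latest row's modelsUsed."""
    if override:
        return override
    for row in reversed(rows):  # newest-first
        breakdowns = row.get("modelBreakdowns") or {}
        if breakdowns:
            return max(breakdowns, key=breakdowns.get)
    if rows:
        models_used = rows[-1].get("modelsUsed") or []
        if models_used:
            return models_used[-1]
    return None

rows = [
    {"date": "2024-05-01", "modelBreakdowns": {"gpt-4.1": 0.42, "gpt-4o": 1.10}},
    {"date": "2024-05-02", "modelBreakdowns": {}, "modelsUsed": ["gpt-4o", "o3"]},
]
print(pick_current_model(rows))            # -> "gpt-4o" (highest cost in the
                                           #    most recent row with breakdowns)
print(pick_current_model(rows, "gpt-4o"))  # override -> "gpt-4o"
```

The fallback only fires when no row carries breakdowns at all, which matches the "when breakdowns are missing" wording above under this reading.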
Inputs
- Default: runs codexbar cost --format json --provider <codex|claude>.
- File or stdin:
codexbar cost --provider codex --format json > /tmp/cost.json
python {baseDir}/scripts/model_usage.py --input /tmp/cost.json --mode all
cat /tmp/cost.json | python {baseDir}/scripts/model_usage.py --input - --mode current
Output
- Text (default) or JSON (--format json --pretty).
- Values are cost-only per model; tokens are not split by model in CodexBar output.
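An "all models" summary is simple to reproduce over the raw cost JSON. This sketch assumes each daily row carries a modelBreakdowns model-to-cost mapping; that field shape is an assumption, so check references/codexbar-cli.md for the real cost JSON fields:

```python
import json

# Sum cost per model across all daily rows ("--mode all" style).
# The modelBreakdowns shape is an assumed field layout, not a
# verified CodexBar schema.
def summarize_all(rows):
    totals = {}
    for row in rows:
        for model, cost in (row.get("modelBreakdowns") or {}).items():
            totals[model] = totals.get(model, 0.0) + cost
    return totals

rows = [
    {"date": "2024-05-01", "modelBreakdowns": {"gpt-4o": 1.25, "o3": 0.25}},
    {"date": "2024-05-02", "modelBreakdowns": {"gpt-4o": 0.50}},
]
# Pretty-printed JSON, in the spirit of --format json --pretty.
print(json.dumps(summarize_all(rows), indent=2, sort_keys=True))
```

Totals here are cost-only, matching the note above that CodexBar does not split tokens by model.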
References
- Read references/codexbar-cli.md for CLI flags and cost JSON fields.
Overview
Get per-model usage cost from CodexBar's local logs for Codex or Claude. The skill can report the current (most recent) model or a full breakdown across all models, and can output text or JSON for automation. Use it whenever model-level usage data is requested or you need a scriptable per-model summary from codexbar cost JSON.
How This Skill Works
The tool reads CodexBar cost data, selecting the current model from the latest daily row with modelBreakdowns or falling back to the last entry in modelsUsed if breakdowns are missing. You can override with --model to target a specific model. Outputs can be plain text or JSON via --format json --pretty for easy integration.
When to Use It
- When you need per-model usage/cost data from CodexBar
- When you require a scriptable per-model summary from codexbar cost JSON
- When you want the current (most recent) model's cost
- When you need a full all-model cost breakdown for comparison
- When testing or benchmarking by forcing a specific model with --model
Quick Start
- Step 1: Fetch cost JSON via codexbar cost or provide a JSON file
- Step 2: Run the summarizer, e.g., python {baseDir}/scripts/model_usage.py --provider codex --mode all
- Step 3: Output as JSON with --format json --pretty for automation, or view text by default
Best Practices
- Use current mode for live costs and all mode for a complete breakdown
- Specify provider codex or claude to match your data source
- Output JSON (--format json --pretty) when feeding pipelines or dashboards
- Provide input via file or stdin to fit into automation workflows
- Rely on the modelBreakdowns logic first, with fallback to modelsUsed if breakdowns are missing
Example Use Cases
- Billing a Codex-based service and needing the current model cost for chargeback
- CI job comparing per-model costs across Codex and Claude models
- Script exporting per-model costs to a monitoring dashboard
- Weekly cost review by model for budgeting and optimization
- Testing model cost differences by forcing a specific model with --model
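The dashboard-export use case above can be sketched as a small post-processing step over the summarizer's JSON output. The flat {model: cost} mapping used here is an assumed shape for --mode all --format json output; verify it against your actual output before wiring this into a pipeline:

```python
import csv
import io
import json

# Convert an assumed {model: cost} JSON summary (as --mode all
# --format json might emit; verify the real shape first) into CSV
# rows a dashboard or spreadsheet can ingest. Cost units are
# whatever CodexBar reports.
def costs_to_csv(summary_json):
    totals = json.loads(summary_json)
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["model", "cost"])
    for model in sorted(totals):
        writer.writerow([model, f"{totals[model]:.4f}"])
    return buf.getvalue()

print(costs_to_csv('{"gpt-4o": 1.75, "o3": 0.25}'))
```

In a real pipeline the summary_json string would come from the script's stdout rather than a literal.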