Token Counter

@mkhaytman87

npx machina-cli add skill @mkhaytman87/token-counter --openclaw
Files (1): SKILL.md (2.5 KB)

Token Counter

Overview

Use this skill to produce token usage reports from local OpenClaw data. It parses session transcripts (.jsonl), session metadata, and cron definitions, then reports usage by category, client, tool, and model, and highlights the top token consumers.

Quick Start

Run:

$OPENCLAW_SKILLS_DIR/token-counter/scripts/token-counter --period 7d

Common Commands

  1. Basic report:
$OPENCLAW_SKILLS_DIR/token-counter/scripts/token-counter --period 7d
  2. Focus on selected breakdowns:
$OPENCLAW_SKILLS_DIR/token-counter/scripts/token-counter \
  --period 1d \
  --breakdown tools,category,client
  3. Analyze one session:
$OPENCLAW_SKILLS_DIR/token-counter/scripts/token-counter \
  --session agent:main:cron:d3d76f7a-7090-41c3-bb19-e2324093f9b1
  4. Export JSON:
$OPENCLAW_SKILLS_DIR/token-counter/scripts/token-counter \
  --period 30d \
  --format json \
  --output $OPENCLAW_WORKSPACE/token-usage/token-usage-30d.json
  5. Persist daily snapshot:
$OPENCLAW_SKILLS_DIR/token-counter/scripts/token-counter \
  --period 1d \
  --save

This writes JSON to: $OPENCLAW_WORKSPACE/token-usage/daily/YYYY-MM-DD.json
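
The snapshot is plain JSON, so it can be post-processed directly. Below is a minimal Python sketch for reading today's snapshot; the field names (total_tokens, by_tool) are assumptions for illustration, not the script's documented schema.

# Sketch only: load today's daily snapshot and print the top tool consumers.
# Field names "total_tokens" and "by_tool" are hypothetical; adjust to the
# JSON the script actually emits.
import json
import os
from datetime import date
from pathlib import Path

workspace = Path(os.environ["OPENCLAW_WORKSPACE"])
snapshot = workspace / "token-usage" / "daily" / f"{date.today().isoformat()}.json"

with snapshot.open() as f:
    report = json.load(f)

print("total tokens:", report.get("total_tokens"))
for tool, tokens in sorted(report.get("by_tool", {}).items(),
                           key=lambda kv: kv[1], reverse=True)[:5]:
    print(f"{tool}: {tokens}")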

Defaults and Data Sources

  • Sessions index: $OPENCLAW_DATA_DIR/agents/main/sessions/sessions.json
  • Session transcripts: $OPENCLAW_DATA_DIR/agents/main/sessions/*.jsonl
  • Cron definitions: $OPENCLAW_DATA_DIR/cron/jobs.json

The parser reads assistant usage fields for token counts and uses tool-call records for attribution.
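
As a rough illustration of the transcript side, the sketch below sums usage fields from a single .jsonl file. The record layout assumed here (a "usage" object with "input_tokens"/"output_tokens" on each assistant entry) is an example, not the parser's actual schema.

# Sketch only: sum token usage from one transcript, assuming each assistant
# record carries a "usage" object with "input_tokens"/"output_tokens".
import json
import sys

totals = {"input_tokens": 0, "output_tokens": 0}
with open(sys.argv[1]) as transcript:  # path to a session .jsonl file
    for line in transcript:
        record = json.loads(line)
        usage = record.get("usage") or {}
        for key in totals:
            totals[key] += usage.get(key, 0)

print(totals)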

Notes on Attribution

  • Tool token attribution is heuristic: assistant-message tokens are split across tool calls in that message.
  • Session totalTokens may come from either the session index metadata or the sum of transcript usage; the maximum of the two is used.
  • Client detection is rules-based (personal, bonsai, mixed, unknown) using path/domain/email markers.
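
The heuristic and the max rule reduce to a few lines of arithmetic. A minimal sketch, illustrative only and not the script's actual code:

# Sketch of the attribution heuristic: split an assistant message's tokens
# evenly across its tool calls, and take the larger of the session-index total
# and the transcript sum as the session total.
from collections import defaultdict

def attribute_tools(message_tokens: int, tool_calls: list[str]) -> dict[str, float]:
    """Evenly split one assistant message's tokens across its tool calls."""
    if not tool_calls:
        return {}
    share = message_tokens / len(tool_calls)
    per_tool = defaultdict(float)
    for name in tool_calls:
        per_tool[name] += share
    return dict(per_tool)

def session_total(index_total: int, transcript_sum: int) -> int:
    """Resolve the session total by taking the maximum of the two sources."""
    return max(index_total, transcript_sum)

print(attribute_tools(900, ["read_file", "read_file", "web_search"]))
# {'read_file': 600.0, 'web_search': 300.0}
print(session_total(12_000, 12_450))  # 12450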

Validation

Run:

python3 $OPENCLAW_SKILLS_DIR/skill-creator/scripts/quick_validate.py \
  $OPENCLAW_SKILLS_DIR/token-counter

References

See:

  • references/classification-rules.md for category/client detection logic and keyword mapping.

Source

git clone https://clawhub.ai/mkhaytman87/token-counter

Overview

The Token Counter analyzes OpenClaw token usage from local data. It parses session transcripts (.jsonl), session metadata, and cron definitions to report usage by category, client, tool, and model, highlighting top token consumers. It supports daily and weekly reports as well as per-session drilldowns, so token-cost optimizations can be backed by transcript evidence.

How This Skill Works

It reads three local data sources: session transcripts (.jsonl), session metadata, and cron definitions. The parser uses assistant usage fields for token counts and tool-call records for attribution, and it resolves session totals by taking the maximum of the session index metadata and the transcript usage sums. Client attribution uses rules-based detection, while tool attribution is heuristic, distributing assistant-message tokens across the tool calls within each message.
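
The client detection rules themselves live in references/classification-rules.md. The sketch below only shows the general shape of such rules; the specific markers are placeholders, not the skill's real keyword mapping.

# Illustrative shape of rules-based client detection. The markers below are
# placeholders; the real mapping is defined in references/classification-rules.md.
PERSONAL_MARKERS = ["~/personal/", "@gmail.com"]           # hypothetical
BONSAI_MARKERS = ["bonsai.example.com", "~/work/bonsai/"]  # hypothetical

def classify_client(text: str) -> str:
    """Return personal, bonsai, mixed, or unknown from path/domain/email markers."""
    personal = any(marker in text for marker in PERSONAL_MARKERS)
    bonsai = any(marker in text for marker in BONSAI_MARKERS)
    if personal and bonsai:
        return "mixed"
    if personal:
        return "personal"
    if bonsai:
        return "bonsai"
    return "unknown"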

When to Use It

  • When a user asks where tokens are being spent
  • When generating daily or weekly token reports
  • For per-session drilldowns to investigate cost drivers
  • When planning token-cost optimizations with evidence from transcripts
  • To audit token attribution accuracy across tools, clients, and categories

Quick Start

  1. Run a basic report for a period, e.g., $OPENCLAW_SKILLS_DIR/token-counter/scripts/token-counter --period 7d
  2. Narrow results with focused breakdowns, e.g., --breakdown tools,category,client
  3. Analyze a session or export/save daily snapshots, e.g., --session <id> or --save

Best Practices

  • Verify data sources exist: sessions.json, transcripts, and cron definitions
  • Cross-check totals between session index and transcript sums using the max rule
  • Use breakdowns (e.g., tools, category, client) to pinpoint high-spend areas
  • Regularly export JSON snapshots for trend analysis and auditing
  • Review client attribution rules and tool-call mappings to improve accuracy

Example Use Cases

  • Identify weekly token spend by client and by tool to spot cost-heavy workflows
  • Drill down a high-spend session to see which tools contributed most tokens
  • Export a 30-day token usage JSON to share with stakeholders
  • Compare token usage across main, cron, and sub-agent sessions for a given period
  • Validate attribution by reconciling transcript usage with session index totals
