
tldr-stats

npx machina-cli add skill parcadei/Continuous-Claude-v3/tldr-stats --openclaw
Files (1)
SKILL.md
2.9 KB

TLDR Stats Skill

Show a beautiful dashboard with token usage, actual API costs, TLDR savings, and hook activity.

When to Use

  • See how much TLDR is saving you in real $ terms
  • Check total session token usage and costs
  • Before/after comparisons of TLDR effectiveness
  • Debug whether TLDR/hooks are being used
  • See which model is being used

Instructions

IMPORTANT: Run the script AND display the output to the user.

  1. Run the stats script:
python3 $CLAUDE_PROJECT_DIR/.claude/scripts/tldr_stats.py
  2. Copy the full output into your response so the user sees the dashboard directly in the chat. Do not just run the command silently - the user wants to see the stats.
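The two steps above can be sketched in Python. This is a minimal sketch, not the skill's actual mechanism; `run_and_show` is a hypothetical helper, and a stand-in command is used here in place of the real script path:

```python
import subprocess
import sys

def run_and_show(cmd):
    """Run a command, raise on failure, and return its full stdout
    so the dashboard can be pasted verbatim into the reply."""
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

# The real skill would invoke the stats script, e.g.:
#   run_and_show(["python3", ".claude/scripts/tldr_stats.py"])
# Illustrated here with a stand-in command:
print(run_and_show([sys.executable, "-c", "print('dashboard goes here')"]), end="")
```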

Sample Output

╔══════════════════════════════════════════════════════════════╗
║  📊 Session Stats                                            ║
╚══════════════════════════════════════════════════════════════╝

  You've spent  $96.52  this session

  Tokens Used
        1.2M sent to Claude
      416.3K received back
       97.8K from prompt cache (8% reused)

  TLDR Savings

    You sent:               1.2M
    Without TLDR:           2.5M

    💰 TLDR saved you ~$18.83
    (Without TLDR: $115.35 → With TLDR: $96.52)

    File reads: 1.3M → 20.9K █████████░ 98% smaller

  TLDR Cache
    Re-reading the same file? TLDR remembers it.
    █████░░░░░░░░░░ 37% cache hits
    (35 reused / 60 parsed fresh)

  Hooks: 553 calls (✓ all ok)
  History: █▃▄ ▇▃▇▆ avg 84% compression
  Daemon: 24m up │ 3 sessions
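The progress bars in the sample output can be rendered with a few lines of Python. This is a sketch, with the bar widths (10 for file reads, 15 for the cache bar) inferred from the sample above rather than taken from the real script:

```python
def bar(pct, width=10):
    """Render a percentage as filled/empty block characters."""
    filled = int(pct / 100 * width)  # floor, so 98% of width 10 -> 9 blocks
    return "█" * filled + "░" * (width - filled)

print(bar(98), "98% smaller")           # █████████░ 98% smaller
print(bar(37, width=15), "37% cache hits")  # █████░░░░░░░░░░ 37% cache hits
```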

Understanding the Numbers

  Metric                    What it means
  You've spent              Actual $ spent on the Claude API this session
  You sent / Without TLDR   Actual tokens sent vs. what they would have been
  TLDR saved you            Money saved by compressing file reads
  File reads X → Y          Raw file tokens compressed to the TLDR summary
  Cache hits                How often TLDR reuses parsed file results
  History sparkline         Compression % over recent sessions (█ = high)
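As a quick sanity check on the File reads metric, the percentage is just the relative size reduction; the token counts below come from the sample dashboard:

```python
def compression_pct(raw_tokens, summary_tokens):
    """Percentage reduction from raw file tokens to the TLDR summary."""
    return round(100 * (1 - summary_tokens / raw_tokens))

print(compression_pct(1_300_000, 20_900))  # 98, matching "1.3M → 20.9K ... 98% smaller"
```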

Visual Elements

  • Progress bars show savings and cache efficiency at a glance
  • Sparklines show historical trends (█ = high savings, ▁ = low)
  • Colors indicate status (green = good, yellow = moderate, red = concern)
  • Emojis distinguish model types (🎭 Opus, 🎵 Sonnet, 🍃 Haiku)
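A minimal sparkline renderer along these lines, assuming values are scaled linearly onto the eight Unicode block heights (the real script's scaling may differ):

```python
BLOCKS = "▁▂▃▄▅▆▇█"

def sparkline(values):
    """Map each value onto one of eight block characters, scaled min..max."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero when all values are equal
    return "".join(BLOCKS[int((v - lo) / span * (len(BLOCKS) - 1))] for v in values)

print(sparkline([90, 40, 55, 80, 45, 82, 70]))
```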

Notes

  • Token savings vary by file size (big files = more savings)
  • Cache hit rate starts low, increases as you re-read files
  • Cost estimates use: Opus $15/1M, Sonnet $3/1M, Haiku $0.25/1M
  • Stats update in real-time as you work
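Given the per-model rates in the Notes, a cost estimate is straightforward arithmetic. This sketch applies the single quoted $/1M figure uniformly; real API pricing distinguishes input, output, and cached tokens, so treat it as an approximation:

```python
RATES_PER_MTOK = {"opus": 15.00, "sonnet": 3.00, "haiku": 0.25}  # $/1M tokens, from the Notes

def est_cost(tokens, model):
    """Estimated dollar cost for a token count at the quoted per-model rate."""
    return tokens / 1_000_000 * RATES_PER_MTOK[model]

print(f"${est_cost(1_200_000, 'opus'):.2f}")  # $18.00 for 1.2M tokens on Opus
```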

Source

git clone https://github.com/parcadei/Continuous-Claude-v3
View on GitHub: https://github.com/parcadei/Continuous-Claude-v3/blob/main/.claude/skills/tldr-stats/SKILL.md

Overview

TLDR Stats builds a beautiful dashboard showing session token usage, actual API costs, TLDR savings, and hook activity. It helps quantify the impact of TLDR, compare before/after scenarios, and debug whether TLDR and hooks are actively used.

How This Skill Works

A stats script runs to collect token counts, costs by model, cache hits, and hook calls, then renders a dashboard-style output. You run the script and paste the full output so the user can see the live stats directly in chat.
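The cache-hit figure in the dashboard follows directly from the reused vs. freshly parsed counts; the numbers below are taken from the sample output:

```python
def cache_hit_pct(reused, fresh):
    """Share of file parses served from the TLDR cache, as a rounded percent."""
    total = reused + fresh
    return round(100 * reused / total) if total else 0

print(cache_hit_pct(35, 60))  # 37, as in "(35 reused / 60 parsed fresh)"
```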

Quick Start

  1. Run the stats script: python3 $CLAUDE_PROJECT_DIR/.claude/scripts/tldr_stats.py
  2. Copy the full output into your response so the dashboard appears in chat
  3. Review tokens, costs, TLDR savings, and hook activity at a glance

Best Practices

  • Always run the stats script and paste the full output so the dashboard renders in chat
  • Note the model costs (Opus, Sonnet, Haiku) and how they affect totals
  • Use cache hits and history sparklines to gauge TLDR effectiveness over time
  • Compare before/after sessions to measure savings and usage changes
  • Watch for hook activity and ensure TLDR is being invoked when expected

Example Use Cases

  • A developer evaluates TLDR impact by comparing two sessions with and without TLDR enabled
  • A product team tracks TLDR savings across a feature kickoff to quantify cost reductions
  • A maintainer debugs low hook usage by inspecting the TLDR stats dashboard
  • An engineer confirms which model was used in a given session via the stats view
  • An ops role monitors real-time token costs to optimize model selections
