
context-engineer

npx machina-cli add skill cacheforge-ai/cacheforge-skills/context-engineer --openclaw
Files (1)
SKILL.md
3.0 KB

When to use this skill

Use this skill when the user wants to:

  • Understand where their context window tokens are going
  • Analyze workspace files (SKILL.md, SOUL.md, MEMORY.md, etc.) for bloat
  • Audit tool definitions for redundancy and overhead
  • Get a comprehensive context efficiency report
  • Compare before/after snapshots to measure optimization progress
  • Optimize system prompts for token efficiency

Commands

# Analyze workspace context files — token counts, efficiency scores, recommendations
python3 skills/context-engineer/context.py analyze --workspace ~/.openclaw/workspace

# Analyze with a custom budget and save a snapshot for later comparison
python3 skills/context-engineer/context.py analyze --workspace ~/.openclaw/workspace --budget 128000 --snapshot before.json

# Audit tool definitions for overhead and overlap
python3 skills/context-engineer/context.py audit-tools --config ~/.openclaw/openclaw.json

# Generate a comprehensive context engineering report
python3 skills/context-engineer/context.py report --workspace ~/.openclaw/workspace --format terminal

# Compare two snapshots to see projected token savings
python3 skills/context-engineer/context.py compare --before before.json --after after.json
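The compare command above diffs two saved snapshots. As a minimal sketch of what such a comparison involves, here is a stdlib-only Python function; note that the snapshot layout used here (`{"files": {"<name>": <tokens>}}`) is a hypothetical assumption for illustration, and the real format written by context.py may differ.

```python
import json

def compare_snapshots(before_path: str, after_path: str) -> int:
    """Projected token savings between two snapshots.

    Assumes a hypothetical snapshot layout {"files": {"<name>": <tokens>}};
    the actual format produced by context.py may differ.
    """
    with open(before_path) as f:
        before = json.load(f)["files"]
    with open(after_path) as f:
        after = json.load(f)["files"]
    # Sum per-file deltas; a file missing from one snapshot counts as 0 there.
    names = set(before) | set(after)
    return sum(before.get(n, 0) - after.get(n, 0) for n in names)
```

A positive result means the "after" snapshot is cheaper; a negative one means static context grew.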

What It Analyzes

  • System prompt efficiency — Length, redundancy detection, compression potential
  • Tool definition overhead — Tool count, per-tool token cost, and detection of unused or overlapping tools
  • Memory file bloat — MEMORY.md size, stale entries, optimization suggestions
  • Skill overhead — Installed skills contributing to context, per-skill token cost
  • Context budget — What % of model context window is consumed by static content vs available for conversation
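The budget check in the last bullet is simple arithmetic: static tokens divided by the configured window. A minimal sketch, using the skill's default 200,000-token budget:

```python
def budget_usage(static_tokens: int, budget: int = 200_000) -> float:
    """Percent of the context window consumed by static content."""
    return 100.0 * static_tokens / budget

# e.g. 48k tokens of prompts, tools, and memory against the default budget
used = budget_usage(48_000)
print(f"static: {used:.1f}%  available for conversation: {100 - used:.1f}%")
```

Passing `--budget 128000` to the analyze command corresponds to changing the `budget` argument here.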

Options

  • --workspace PATH — Path to workspace directory (default: ~/.openclaw/workspace)
  • --config PATH — Path to OpenClaw config file (default: ~/.openclaw/openclaw.json)
  • --budget N — Context window token budget (default: 200000)
  • --snapshot FILE — Save analysis snapshot to FILE for later comparison
  • --format terminal — Output format (currently: terminal)

Notes

  • Token estimates are approximate (~4 characters per token). For precise counts, use a model-specific tokenizer.
  • No external dependencies required — runs with Python 3 stdlib only.
  • Built by Anvil AI — context engineering experts. https://labs.anvil-ai.io
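The ~4 characters/token heuristic mentioned in the notes can be sketched in one line of Python; treat its output as a rough estimate only, as the note says, and use a model-specific tokenizer when precision matters.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters/token heuristic."""
    return max(1, len(text) // 4)

print(estimate_tokens("Analyze workspace context files for bloat."))  # → 10
```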

Source

git clone https://github.com/cacheforge-ai/cacheforge-skills.git

Overview

This skill analyzes where tokens go and audits workspace files for bloat. It provides a comprehensive context efficiency report and token-cost breakdown. It also supports before/after snapshots to quantify improvements and guide optimization.

How This Skill Works

The tool parses system prompts, tool definitions, memory files, and installed skills to compute per-item token costs. It flags redundancy, overlap, and bloat, then suggests compression, removal, or reordering. Users can generate reports and save snapshots to track optimization progress over time.
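The per-item token costing described above can be approximated with a short stdlib-only scan. This is a sketch under stated assumptions, not the skill's actual implementation: it applies the documented ~4 chars/token heuristic uniformly to the workspace's markdown files, whereas the real analyzer in context.py may weigh items differently.

```python
from pathlib import Path

def workspace_costs(workspace: str) -> dict[str, int]:
    """Estimated token cost per markdown file in a workspace directory.

    Uses the ~4 chars/token heuristic the skill documents; illustrative only.
    """
    costs = {}
    for path in sorted(Path(workspace).glob("*.md")):
        text = path.read_text(encoding="utf-8", errors="replace")
        costs[path.name] = max(1, len(text) // 4)
    return costs
```

Sorting the results by cost, descending, is a quick way to spot the MEMORY.md-style bloat the skill flags.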

Quick Start

  1. Analyze the workspace: python3 skills/context-engineer/context.py analyze --workspace ~/.openclaw/workspace
  2. Audit tool definitions (optional before a deep dive): python3 skills/context-engineer/context.py audit-tools --config ~/.openclaw/openclaw.json
  3. Generate a report: python3 skills/context-engineer/context.py report --workspace ~/.openclaw/workspace --format terminal

Best Practices

  • Run analyze with a consistent budget and save a snapshot to enable future comparisons
  • Regularly audit MEMORY.md and tool definitions to catch drift and bloat
  • Optimize the system prompt first; it usually yields larger gains than tool-level tinkering
  • Use snapshots to quantify token savings and track progress over time
  • Document changes and compare successive reports to maintain a clear optimization trail

Example Use Cases

  • A team reduces static content in system prompts after a dedicated context audit, freeing significant tokens for conversation
  • Redundant tools are identified and removed, shrinking per-tool token costs and improving responsiveness
  • Memory file bloat is trimmed by pruning stale entries and compressing long-lived data
  • A comprehensive report highlights bottlenecks across skills and prompts, guiding targeted refactors
  • Snapshots before/after show measurable token savings and cleaner context budgets for agents
