Context Compactor
npx machina-cli add skill @emberDesire/context-compactor --openclaw
Automatic context compaction for OpenClaw when using local models that don't properly report token limits or context overflow errors.
The Problem
Cloud APIs (Anthropic, OpenAI) report context overflow errors, allowing OpenClaw's built-in compaction to trigger. Local models (MLX, llama.cpp, Ollama) often:
- Silently truncate context
- Return garbage when context is exceeded
- Don't report accurate token counts
This leaves you with broken conversations when context gets too long.
The Solution
Context Compactor estimates tokens client-side and proactively summarizes older messages before hitting the model's limit.
How It Works
┌──────────────────────────────────────────────────┐
│ 1. Message arrives                               │
│ 2. before_agent_start hook fires                 │
│ 3. Plugin estimates total context tokens         │
│ 4. If over maxTokens:                            │
│    a. Split into "old" and "recent" messages     │
│    b. Summarize old messages (LLM or fallback)   │
│    c. Inject summary as compacted context        │
│ 5. Agent sees: summary + recent + new message    │
└──────────────────────────────────────────────────┘
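The flow above can be sketched in a few lines of Python. This is a minimal illustration of the technique, not the plugin's actual code; all function and field names here are hypothetical:

```python
# Minimal sketch of estimate-then-compact; names are hypothetical,
# not the plugin's real API.

CHARS_PER_TOKEN = 4  # matches the default charsPerToken setting

def estimate_tokens(messages):
    """Estimate tokens from character count (charsPerToken heuristic)."""
    return sum(len(m["content"]) for m in messages) // CHARS_PER_TOKEN

def split_recent(messages, keep_recent_tokens):
    """Walk backwards, keeping the newest messages until the budget is spent."""
    budget, recent = keep_recent_tokens, []
    for m in reversed(messages):
        cost = len(m["content"]) // CHARS_PER_TOKEN
        if budget - cost < 0 and recent:
            break
        budget -= cost
        recent.append(m)
    recent.reverse()
    return messages[: len(messages) - len(recent)], recent

def compact(messages, max_tokens, keep_recent_tokens, summarize):
    """Return messages unchanged, or summary + recent if over the limit."""
    if estimate_tokens(messages) <= max_tokens:
        return messages
    old, recent = split_recent(messages, keep_recent_tokens)
    summary = {"role": "system", "content": summarize(old)}
    return [summary] + recent
```

With the defaults (maxTokens 8000, keepRecentTokens 2000), twelve ~1,000-token messages compact down to one summary plus the two most recent messages.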
Installation
# One command setup (recommended)
npx jasper-context-compactor setup
# Restart gateway
openclaw gateway restart
The setup command automatically:
- Copies plugin files to ~/.openclaw/extensions/context-compactor/
- Adds plugin config to openclaw.json with sensible defaults
Configuration
Add to openclaw.json:
{
"plugins": {
"entries": {
"context-compactor": {
"enabled": true,
"config": {
"maxTokens": 8000,
"keepRecentTokens": 2000,
"summaryMaxTokens": 1000,
"charsPerToken": 4
}
}
}
}
}
Options
| Option | Default | Description |
|---|---|---|
| enabled | true | Enable/disable the plugin |
| maxTokens | 8000 | Max context tokens before compaction |
| keepRecentTokens | 2000 | Tokens to preserve from recent messages |
| summaryMaxTokens | 1000 | Max tokens for the summary |
| charsPerToken | 4 | Token estimation ratio |
| summaryModel | (session model) | Model to use for summarization |
Tuning for Your Model
MLX (8K context models):
{
"maxTokens": 6000,
"keepRecentTokens": 1500,
"charsPerToken": 4
}
Larger context (32K models):
{
"maxTokens": 28000,
"keepRecentTokens": 4000,
"charsPerToken": 4
}
Small context (4K models):
{
"maxTokens": 3000,
"keepRecentTokens": 800,
"charsPerToken": 4
}
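The presets above leave headroom below each model's context window for the reply and for estimation error. A small helper like the following (hypothetical, not part of the plugin) derives similar values by reserving roughly a quarter of the window:

```python
# Rule of thumb (an assumption, not from the plugin docs): reserve ~25%
# of the context window for the model's reply and estimation slack.
def suggest_config(context_window_tokens):
    max_tokens = int(context_window_tokens * 0.75)
    return {
        "maxTokens": max_tokens,
        "keepRecentTokens": max(max_tokens // 4, 500),
        "charsPerToken": 4,
    }
```

For an 8,192-token window this suggests maxTokens 6144 and keepRecentTokens 1536, close to the 8K preset above.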
Commands
/compact-now
Force clear the summary cache and trigger fresh compaction on next message.
/compact-now
/context-stats
Show current context token usage and whether compaction would trigger.
/context-stats
Output:
📊 Context Stats
Messages: 47 total
- User: 23
- Assistant: 24
- System: 0
Estimated Tokens: ~6,234
Limit: 8,000
Usage: 77.9%
✅ Within limits
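The numbers in this report can be reproduced with the same charsPerToken heuristic. This is a sketch of what /context-stats computes, not the plugin's actual code; the function name is hypothetical:

```python
# Hypothetical sketch of the /context-stats calculation.
def context_stats(messages, max_tokens, chars_per_token=4):
    counts = {"user": 0, "assistant": 0, "system": 0}
    chars = 0
    for m in messages:
        counts[m["role"]] = counts.get(m["role"], 0) + 1
        chars += len(m["content"])
    est = chars // chars_per_token  # estimated tokens
    return {
        "messages": len(messages),
        "by_role": counts,
        "estimated_tokens": est,
        "usage_pct": round(100 * est / max_tokens, 1),
        "would_compact": est > max_tokens,
    }
```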
How Summarization Works
When compaction triggers:
- Split messages into "old" (to summarize) and "recent" (to keep)
- Generate summary using the session model (or configured summaryModel)
- Cache the summary to avoid regenerating for the same content
- Inject context with the summary prepended
If the LLM runtime isn't available (e.g., during startup), a fallback truncation-based summary is used.
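The caching and fallback behavior can be sketched like this. The names are hypothetical, and the plugin's real cache key and fallback strategy may differ:

```python
import hashlib

# Hypothetical summary cache keyed on message content.
_cache = {}

def cache_key(old_messages):
    """Content hash so identical old-message runs reuse the same summary."""
    h = hashlib.sha256()
    for m in old_messages:
        h.update(m["content"].encode())
        h.update(b"\x00")  # separator so message boundaries matter
    return h.hexdigest()

def summarize_with_fallback(old_messages, llm=None, summary_max_chars=4000):
    key = cache_key(old_messages)
    if key in _cache:
        return _cache[key]
    if llm is not None:
        summary = llm(old_messages)
    else:
        # Fallback: truncation-based summary when no LLM runtime is available.
        joined = " ".join(m["content"] for m in old_messages)
        summary = joined[:summary_max_chars]
    _cache[key] = summary
    return summary
```

Hashing the content (rather than, say, message count) means editing or replaying history invalidates the cache naturally.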
Differences from Built-in Compaction
| Feature | Built-in | Context Compactor |
|---|---|---|
| Trigger | Model reports overflow | Token estimate threshold |
| Works with local models | ❌ (need overflow error) | ✅ |
| Persists to transcript | ✅ | ❌ (session-only) |
| Summarization | Pi runtime | Plugin LLM call |
Context Compactor is complementary — it catches cases before they hit the model's hard limit.
Troubleshooting
Summary quality is poor:
- Try a better summaryModel
- Increase summaryMaxTokens
- The fallback truncation is used if the LLM runtime isn't available

Compaction triggers too often:
- Increase maxTokens
- Decrease keepRecentTokens (keeps less, summarizes earlier)

Not compacting when expected:
- Check /context-stats to see current usage
- Verify enabled: true in config
- Check logs for [context-compactor] messages
Characters per token wrong:
- Default of 4 works for English
- Try 3 for CJK languages
- Try 5 for highly technical content
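If you have a tokenizer for your model, you can measure the real ratio on representative text instead of guessing. In this sketch, tokenize is a stand-in for your model's tokenizer function:

```python
# Measure the average characters-per-token ratio over sample texts.
# `tokenize` is a placeholder for your model's tokenizer.
def measure_chars_per_token(samples, tokenize):
    chars = sum(len(s) for s in samples)
    tokens = sum(len(tokenize(s)) for s in samples)
    return chars / tokens
```

Round the result to the nearest integer and set it as charsPerToken.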
Logs
Enable debug logging:
{
"plugins": {
"entries": {
"context-compactor": {
"config": {
"logLevel": "debug"
}
}
}
}
}
Look for:
[context-compactor] Current context: ~XXXX tokens
[context-compactor] Compacted X messages → summary
Overview
Context Compactor automatically manages token context for local models that don't report limits. It estimates tokens on the client side and proactively summarizes older messages before hitting the model's limit, preventing silent truncation or garbled replies. This helps maintain coherent conversations with MLX, llama.cpp, or Ollama in OpenClaw.
How This Skill Works
On each incoming message, the before_agent_start hook triggers a token estimate. If total tokens exceed maxTokens, the plugin splits messages into old and recent, summarizes the old ones (via the session model or a fallback), and injects the summary as compacted context so the agent sees summary + recent + new. If the LLM runtime isn't available, a fallback truncation-based summary is used.
When to Use It
- Using local models (MLX, llama.cpp, Ollama) that don’t report token limits in OpenClaw
- When conversations overrun and you risk silent truncation or garbage replies
- To proactively manage context before hitting the model's limit
- For long-running chats where preserving recent context is critical
- When you want a config-driven option you can tune via openclaw.json
Quick Start
- Step 1: Install and restart: npx jasper-context-compactor setup; openclaw gateway restart
- Step 2: Add/adjust plugin config in openclaw.json with maxTokens, keepRecentTokens, summaryMaxTokens, charsPerToken
- Step 3: Send messages and optionally run /context-stats or /compact-now to manage context
Best Practices
- Enable the plugin in openclaw.json (enabled = true)
- Tune maxTokens, keepRecentTokens, summaryMaxTokens, and charsPerToken to fit your model
- Set a sensible summaryModel for quality summaries
- Test with /context-stats to understand usage and thresholds
- Use /compact-now to force a fresh compaction when needed
Example Use Cases
- An 8K MLX model chat that hits 6,000 tokens; compactor trims older messages and preserves the latest context
- A 32K model with maxTokens 28,000 and keepRecentTokens 4,000 keeps a long-running conversation coherent
- Startup time when the LLM runtime is unavailable falls back to truncation-based summarization
- Users run /context-stats to monitor token usage and anticipate compaction
- Administrators run /compact-now to refresh summaries after model updates