context-optimization

npx machina-cli add skill guanyang/antigravity-skills/context-optimization --openclaw
Files (1): SKILL.md (8.3 KB)

Context Optimization Techniques

Context optimization extends the effective capacity of limited context windows through strategic compression, masking, caching, and partitioning. The goal is not to magically enlarge the window but to make better use of available capacity. Done well, optimization can double or triple effective capacity without requiring larger models or longer contexts.

When to Activate

Activate this skill when:

  • Context limits constrain task complexity
  • Optimizing for cost reduction (fewer tokens = lower costs)
  • Reducing latency for long conversations
  • Implementing long-running agent systems
  • Needing to handle larger documents or conversations
  • Building production systems at scale

Core Concepts

Context optimization extends effective capacity through four primary strategies: compaction (summarizing context near limits), observation masking (replacing verbose outputs with references), KV-cache optimization (reusing cached computations), and context partitioning (splitting work across isolated contexts).

The key insight is that context quality matters more than quantity. Optimization preserves signal while reducing noise. The art lies in selecting what to keep versus what to discard, and when to apply each technique.

Detailed Topics

Compaction Strategies

What is Compaction

Compaction is the practice of summarizing context contents when approaching limits, then starting a fresh context window seeded with the summary. This distills the contents of a context window in a high-fidelity manner, enabling the agent to continue with minimal performance degradation.

Compaction typically serves as the first lever in context optimization.

Compaction Implementation

Compaction works by identifying sections that can be compressed, generating summaries that capture essential points, and replacing full content with the summaries. Compress tool outputs first (replace with summaries), then old turns (summarize early conversation), then retrieved docs (summarize when the originals remain retrievable). Never compress the system prompt.
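
A minimal sketch of this loop, assuming messages are dicts with "role" and "content" fields and summarize is any LLM-backed summarization callable; all names here are illustrative:

def compact_context(messages, summarize, keep_recent=5):
    """Summarize old turns and tool outputs; keep the system prompt
    and the most recent turns verbatim."""
    cutoff = len(messages) - keep_recent
    compacted = []
    for i, msg in enumerate(messages):
        if msg["role"] == "system" or i >= cutoff:
            compacted.append(msg)  # never compress these
        else:
            compacted.append({**msg, "content": summarize(msg["content"])})
    return compacted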

Summary Generation

Effective summaries preserve different elements depending on message type:

Tool outputs: Preserve key findings, metrics, and conclusions. Remove verbose raw output.

Conversational turns: Preserve key decisions, commitments, and context shifts. Remove filler and back-and-forth.

Retrieved documents: Preserve key facts and claims. Remove supporting evidence and elaboration.
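
These rules can be encoded as per-type instructions handed to the summarizer. A sketch, where llm stands in for any text-in, text-out model call (the names are illustrative):

SUMMARY_INSTRUCTIONS = {
    "tool_output": "Preserve key findings, metrics, and conclusions. Drop raw output.",
    "turn": "Preserve decisions, commitments, and context shifts. Drop filler.",
    "document": "Preserve key facts and claims. Drop supporting elaboration.",
}

def summarize_message(llm, msg_type, content):
    # Prepend the type-specific instruction so the summary keeps the right elements
    return llm(f"{SUMMARY_INSTRUCTIONS[msg_type]}\n\n{content}")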

Observation Masking

The Observation Problem

Tool outputs can comprise 80%+ of token usage in agent trajectories. Much of this is verbose output that has already served its purpose. Once an agent has used a tool output to make a decision, keeping the full output provides diminishing value while consuming significant context.

Observation masking replaces verbose tool outputs with compact references. The information remains accessible if needed but does not consume context continuously.

Masking Strategy Selection

Not all observations should be masked equally:

Never mask: Observations critical to current task, observations from the most recent turn, observations used in active reasoning.

Consider masking: Observations from 3+ turns ago, verbose outputs with key points extractable, observations whose purpose has been served.

Always mask: Repeated outputs, boilerplate headers/footers, outputs already summarized in conversation.
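
A sketch of this triage, assuming each observation is tracked as a small record carrying the flags the rules refer to (the Observation type and its fields are illustrative):

from dataclasses import dataclass

@dataclass
class Observation:
    turn: int
    critical: bool = False
    in_active_reasoning: bool = False
    is_repeated: bool = False
    is_boilerplate: bool = False
    already_summarized: bool = False

def should_mask(obs: Observation, current_turn: int) -> bool:
    if obs.critical or obs.turn == current_turn or obs.in_active_reasoning:
        return False  # never mask
    if obs.is_repeated or obs.is_boilerplate or obs.already_summarized:
        return True  # always mask
    return current_turn - obs.turn >= 3  # consider masking once purpose is served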

KV-Cache Optimization

Understanding KV-Cache

The KV-cache stores the Key and Value tensors computed during inference, growing linearly with sequence length. Reusing the cache across requests that share identical prefixes avoids recomputation.

Prefix caching reuses KV blocks across requests with identical prefixes using hash-based block matching. This dramatically reduces cost and latency for requests with common prefixes like system prompts.
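
A simplified sketch of hash-based block matching; this illustrates the idea rather than any particular engine's implementation, and the block size and hashing scheme are assumptions:

import hashlib

BLOCK_SIZE = 16  # tokens per KV block; real engines use their own sizes

def block_hashes(token_ids):
    # Chain the hashes so each one identifies a block plus everything before it
    hashes, prev = [], b""
    for i in range(len(token_ids) // BLOCK_SIZE):
        block = token_ids[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE]
        prev = hashlib.sha256(prev + str(block).encode()).digest()
        hashes.append(prev)
    return hashes

def cached_prefix_tokens(token_ids, kv_cache):
    # Count how many leading tokens already have cached KV blocks
    cached = 0
    for h in block_hashes(token_ids):
        if h not in kv_cache:
            break
        cached += BLOCK_SIZE
    return cached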

Cache Optimization Patterns

Optimize for caching by ordering context elements to maximize cache hits. Place stable elements first (system prompt, tool definitions), then frequently reused elements, then unique elements last.

Design prompts to maximize cache stability: avoid dynamic content like timestamps, use consistent formatting, keep structure stable across sessions.
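
For instance, a timestamp embedded in the system prompt changes every request and breaks prefix matching, while moving volatile details to the end of the context keeps the stable prefix cacheable. A hypothetical before/after:

import time

# Cache-unfriendly: the prefix changes on every request
bad_system = f"You are a support agent. Current time: {time.ctime()}"

# Cache-friendly: byte-stable system prompt; volatile details go last
good_system = "You are a support agent."
user_turn = f"(Sent at {time.ctime()}) Please summarize today's tickets."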

Context Partitioning

Sub-Agent Partitioning

The most aggressive form of context optimization is partitioning work across sub-agents with isolated contexts. Each sub-agent operates in a clean context focused on its subtask, without carrying accumulated context from other subtasks.

This approach achieves separation of concerns—the detailed search context remains isolated within sub-agents while the coordinator focuses on synthesis and analysis.

Result Aggregation

Aggregate results from partitioned subtasks by validating that all partitions completed, merging compatible results, and summarizing the merge if it is still too large.
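
A sketch of the partition-then-aggregate flow, where run_subagent and summarize are hypothetical callables and the token estimate is a rough chars-per-token heuristic:

def run_partitioned(subtasks, run_subagent, summarize, max_tokens=4000):
    results = [run_subagent(task) for task in subtasks]  # each in an isolated context
    if any(r is None for r in results):
        raise RuntimeError("a partition failed to complete")  # validate first
    merged = "\n\n".join(results)  # merge compatible results
    if len(merged) // 4 > max_tokens:  # ~4 characters per token
        merged = summarize(merged)  # summarize if still too large
    return merged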

Budget Management

Context Budget Allocation

Design explicit context budgets. Allocate tokens to categories: system prompt, tool definitions, retrieved docs, message history, and reserved buffer. Monitor usage against budget and trigger optimization when approaching limits.
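
An illustrative allocation for a 128K-token window; the proportions below are assumptions for the sketch, not prescriptions:

CONTEXT_LIMIT = 128_000
BUDGET = {
    "system_prompt": int(0.05 * CONTEXT_LIMIT),
    "tool_definitions": int(0.05 * CONTEXT_LIMIT),
    "retrieved_docs": int(0.30 * CONTEXT_LIMIT),
    "message_history": int(0.45 * CONTEXT_LIMIT),
    "reserved_buffer": int(0.15 * CONTEXT_LIMIT),
}

def over_budget(usage):
    # usage maps category -> current token count
    return [cat for cat, used in usage.items() if used > BUDGET[cat]]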

Trigger-Based Optimization

Monitor signals for optimization triggers: token utilization above 80%, degradation indicators, and performance drops. Apply appropriate optimization techniques based on context composition.
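
A sketch of a dispatcher that mirrors the decision framework under Practical Guidance below; the category names follow the budget example above and are illustrative:

def choose_optimization(usage, context_limit):
    total = sum(usage.values())
    if total / context_limit <= 0.8:
        return None  # no trigger yet
    dominant = max(usage, key=usage.get)  # which component dominates?
    return {
        "tool_outputs": "observation_masking",
        "retrieved_docs": "summarization_or_partitioning",
        "message_history": "compaction",
    }.get(dominant, "combine_strategies")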

Practical Guidance

Optimization Decision Framework

When to optimize:

  • Context utilization exceeds 70%
  • Response quality degrades as conversations extend
  • Costs increase due to long contexts
  • Latency increases with conversation length

What to apply:

  • Tool outputs dominate: observation masking
  • Retrieved documents dominate: summarization or partitioning
  • Message history dominates: compaction with summarization
  • Multiple components: combine strategies

Performance Considerations

Compaction should achieve 50-70% token reduction with less than 5% quality degradation. Masking should achieve 60-80% reduction in masked observations. Cache optimization should achieve 70%+ hit rate for stable workloads.

Monitor and iterate on optimization strategies based on measured effectiveness.
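
One way to track this, assuming a task-level quality score on a 0-1 scale (the targets encoded here simply restate the figures above):

def optimization_report(tokens_before, tokens_after, score_before, score_after):
    reduction = 1 - tokens_after / tokens_before
    quality_loss = score_before - score_after
    return {
        "token_reduction": reduction,   # compaction target: 0.50-0.70
        "quality_loss": quality_loss,   # target: under 0.05
        "within_target": 0.50 <= reduction <= 0.70 and quality_loss < 0.05,
    }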

Examples

Example 1: Compaction Trigger

# Trigger compaction once utilization crosses the 80% threshold
if context_tokens / context_limit > 0.8:
    context = compact_context(context)

Example 2: Observation Masking

# Store the full observation out of context; keep a reference plus key points
if len(observation) > max_length:
    ref_id = store_observation(observation)
    return f"[Obs:{ref_id} elided. Key: {extract_key(observation)}]"

Example 3: Cache-Friendly Ordering

# Stable content first
context = [system_prompt, tool_definitions]  # Cacheable
context += [reused_templates]  # Reusable
context += [unique_content]  # Unique

Guidelines

  1. Measure before optimizing—know your current state
  2. Apply compaction before masking when possible
  3. Design for cache stability with consistent prompts
  4. Partition before context becomes problematic
  5. Monitor optimization effectiveness over time
  6. Balance token savings against quality preservation
  7. Test optimization at production scale
  8. Implement graceful degradation for edge cases

Integration

This skill builds on context-fundamentals and context-degradation. It connects to:

  • multi-agent-patterns - Partitioning as isolation
  • evaluation - Measuring optimization effectiveness
  • memory-systems - Offloading context to memory

Skill Metadata

Created: 2025-12-20
Last Updated: 2025-12-20
Author: Agent Skills for Context Engineering Contributors
Version: 1.0.0

Source

git clone https://github.com/guanyang/antigravity-skills

View on GitHub: https://github.com/guanyang/antigravity-skills/blob/main/skills/context-optimization/SKILL.md

How This Skill Works

The approach analyzes what to keep, summarize, reference, cache, or partition. It applies compaction to near-limit content, masks verbose outputs behind references, reuses cached computations via the KV-cache, and splits work across isolated contexts when needed. The result is more efficient context use without increasing model size or context window.

Quick Start

  1. Detect when the current context is near limits and choose a primary optimization lever (compaction, masking, or partitioning).
  2. Generate high-fidelity summaries for near-limit content and replace verbose sections with references, keeping the system prompt intact.
  3. Enable KV-cache reuse for repeated computations and, if needed, partition the input to distribute work across isolated contexts.

Best Practices

  • Start with compaction when approaching limits: compress verbose tool outputs and early turns first, preserve key decisions and recent context, and never compress the system prompt.
  • Use observation masking strategically: never mask critical recent observations, consider masking observations from 3+ turns ago or verbose outputs with extractable key points, and always mask repeated or boilerplate outputs.
  • Implement KV-cache optimization by reusing cached Key-Value computations across requests with shared prefixes and pruning stale entries to keep the cache coherent and efficient.
  • Partition context for large documents or long conversations to split work across isolated contexts and maintain signal.
  • Continuously test for signal loss after compression and adjust thresholds to balance context efficiency with task fidelity.

Example Use Cases

  • A customer support chatbot reduces token costs by summarizing past chats as the conversation grows, while keeping key decisions intact.
  • A legal document review pipeline uses compaction to keep only essential facts from lengthy files, preserving accuracy for filings.
  • A multi-tool agent replaces verbose tool outputs with compact references to results, saving tokens during multi-step reasoning.
  • A production-grade onboarding assistant employs KV-cache to reuse repeated computations, speeding up long-running conversations with clients.
  • An enterprise document ingestion system partitions large documents into sections processed in parallel, maintaining context integrity across chunks.
