
context-optimization

npx machina-cli add skill athola/claude-night-market/context-optimization --openclaw
Files (1)
SKILL.md
4.8 KB


Context Optimization Hub

Quick Start

Basic Usage

# Analyze current context usage
python -m conserve.context_analyzer

When To Use

  • Threshold Alert: When context usage approaches 50% of the window.
  • Complex Tasks: For operations requiring multi-file analysis or long tool chains.

When NOT To Use

  • Simple single-step tasks with low context usage
  • Already using mcp-code-execution for tool chains

Core Hub Responsibilities

  1. Assess context pressure and MECW compliance.
  2. Route to appropriate specialized modules.
  3. Coordinate subagent-based workflows.
  4. Manage token budget allocation across modules.
  5. Synthesize results from modular execution.
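Responsibility 4 (token budget allocation) can be sketched as a proportional split. This is a minimal illustration, not the hub's actual implementation; the module weights are hypothetical placeholders.

```python
def allocate_token_budget(total_budget, module_weights):
    """Split a context-token budget across modules proportionally to weight.

    `module_weights` is a hypothetical mapping of module name to relative
    priority; real weights would come from the hub's pressure assessment.
    """
    total_weight = sum(module_weights.values())
    return {
        name: int(total_budget * weight / total_weight)
        for name, weight in module_weights.items()
    }

# Example: give subagent coordination three times the budget of assessment.
budget = allocate_token_budget(
    40_000,
    {"mecw-assessment": 1, "subagent-coordination": 3},
)
```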

Module Selection Strategy

def select_optimal_modules(context_situation, task_complexity):
    """Choose which specialized modules to load for the current task."""
    if context_situation == "CRITICAL":
        # Over 50% utilization: assess pressure and delegate to subagents.
        return ["mecw-assessment", "subagent-coordination"]
    elif task_complexity == "high":
        # Complex tasks warrant full MECW principles plus delegation.
        return ["mecw-principles", "subagent-coordination"]
    else:
        # Low pressure, simple task: a lightweight assessment suffices.
        return ["mecw-assessment"]

Context Classification

Utilization | Status | Action
----------- | ------ | ------
< 30% | LOW | Continue normally
30-50% | MODERATE | Monitor, apply principles
> 50% | CRITICAL | Immediate optimization required
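The classification table above maps directly to a small helper. A minimal sketch, assuming utilization is reported as a 0.0-1.0 ratio (`classify_context` is illustrative, not part of the skill's API):

```python
def classify_context(utilization):
    """Map a context-utilization ratio (0.0-1.0) to a (status, action) pair.

    Thresholds follow the classification table above.
    """
    if utilization < 0.30:
        return "LOW", "Continue normally"
    if utilization <= 0.50:
        return "MODERATE", "Monitor, apply principles"
    return "CRITICAL", "Immediate optimization required"
```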

Large Output Handling (Claude Code 2.1.2+)

Behavior Change: Large bash command and tool outputs are saved to disk instead of being truncated; file references are provided for access.

Impact on Context Optimization

Scenario | Before 2.1.2 | After 2.1.2
-------- | ------------ | -----------
Large test output | Truncated, partial data | Full output via file reference
Verbose build logs | Lost after 30K chars | Complete, accessible on demand
Context pressure | Reduced by truncation | Same: only loaded when read

Best Practices

  • Avoid pre-emptive reads: Large outputs are referenced, not automatically loaded into context.
  • Read selectively: Use head, tail, or grep on file references.
  • Leverage full data: Quality gates can access complete test results via files.
  • Monitor growth: File references are small, but reading the full files adds to context.

Integration Points

  • Token Conservation: Receives usage strategies, returns MECW-compliant optimizations.
  • CPU/GPU Performance: Aligns context optimization with resource constraints.
  • MCP Code Execution: Delegates complex patterns to specialized MCP modules.

Resources

  • MECW Theory: See modules/mecw-principles.md for core concepts and the 50% rule.
  • MECW Theory (Extended): See modules/mecw-theory.md for pressure levels, compliance checking, and monitoring patterns.
  • Context Analysis: See modules/mecw-assessment.md for risk identification.
  • Workflow Delegation: See modules/subagent-coordination.md for decomposition patterns.
  • Context Waiting: See modules/context-waiting.md for deferred loading strategies.

Troubleshooting

Common Issues

If context usage remains high after optimization, check for large files that were read entirely rather than selectively. If MECW assessments fail, ensure that your environment provides accurate token count metadata. For permission errors when writing output logs to /tmp, verify that the project's temporary directory is writable.
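The temporary-directory permission check can be automated. A minimal sketch using only the standard library (`temp_dir_writable` is an illustrative helper, not part of the skill):

```python
import tempfile


def temp_dir_writable():
    """Check that the platform temp directory accepts writes.

    A quick diagnostic for the permission errors described above.
    """
    try:
        # Creating and deleting a scratch file proves write access.
        with tempfile.NamedTemporaryFile(dir=tempfile.gettempdir()):
            pass
        return True
    except OSError:
        return False
```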

Source

git clone https://github.com/athola/claude-night-market

The skill file lives at plugins/conserve/skills/context-optimization/SKILL.md in the cloned repository.

Overview

Context optimization helps you assess context pressure before tackling complex tasks. It routes work to specialized MECW-compliant modules to maintain efficiency and prevent context overflow. This is essential for multi-file analyses, long tool chains, and other high-context workloads.

How This Skill Works

Context optimization monitors usage, compares it to thresholds, and routes work to specialized MECW-compliant modules. It coordinates subagent workflows, manages token budgets, and synthesizes results from modular execution to keep context pressure in check.

When to Use It

  • Context usage nears 50 percent of the window
  • Tasks require decomposition into subtasks
  • Complex multi-file analyses or long tool chains
  • High context pressure requiring MECW-compliant optimization
  • Before starting a complex workflow when MCP code execution is not already engaged

Quick Start

  1. Analyze current context usage with python -m conserve.context_analyzer
  2. If usage is high or the task is complex, route to MECW and subagent modules
  3. Coordinate workflows, monitor token budgets, and synthesize results
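The Quick Start flow can be sketched end to end. `optimize_context` is a hypothetical wrapper; the thresholds and module names mirror the tables earlier on this page, and the utilization value would come from conserve.context_analyzer:

```python
def optimize_context(utilization, task_complexity):
    """Sketch of the Quick Start flow: classify pressure, route, report."""
    # Step 1: classify current pressure against the 30%/50% thresholds.
    if utilization > 0.50:
        situation = "CRITICAL"
    elif utilization >= 0.30:
        situation = "MODERATE"
    else:
        situation = "LOW"

    # Step 2: route to modules based on pressure and task complexity.
    if situation == "CRITICAL":
        modules = ["mecw-assessment", "subagent-coordination"]
    elif task_complexity == "high":
        modules = ["mecw-principles", "subagent-coordination"]
    else:
        modules = ["mecw-assessment"]

    # Step 3: the hub would now run the modules and synthesize results.
    return {"situation": situation, "modules": modules}
```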

Best Practices

  • Monitor context usage with the analyzer before starting work
  • Route work to MECW-compliant modules and coordinate subagents
  • Coordinate token budget across modules to avoid waste
  • Synthesize modular results into a single coherent output
  • Use file references for large outputs and read selectively

Example Use Cases

  • Optimizing a data pipeline that touches multiple files and CLI tools
  • Coordinating a multi file code analysis before a major refactor
  • Managing an ML experiment with verbose logs and long tool chains
  • Orchestrating a security audit across several modules and toolchains
  • Decomposing a research task that spans multiple data sources
