context-optimization
npx machina-cli add skill athola/claude-night-market/context-optimization --openclaw

Table of Contents
- Quick Start
- When to Use
- Core Hub Responsibilities
- Module Selection Strategy
- Context Classification
- Integration Points
- Resources
Context Optimization Hub
Quick Start
Basic Usage
```shell
# Analyze current context usage
python -m conserve.context_analyzer
```
When To Use
- Threshold Alert: When context usage approaches 50% of the window.
- Complex Tasks: For operations requiring multi-file analysis or long tool chains.
When NOT To Use
- Simple single-step tasks with low context usage
- Already using mcp-code-execution for tool chains
Core Hub Responsibilities
- Assess context pressure and MECW compliance.
- Route to appropriate specialized modules.
- Coordinate subagent-based workflows.
- Manage token budget allocation across modules.
- Synthesize results from modular execution.
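One of the responsibilities above is token-budget allocation across modules. A minimal sketch of proportional allocation follows; the function name, module names, and weights are illustrative, not part of the skill's actual API:

```python
# Illustrative sketch: split a token budget across modules by weight.
# allocate_budget, the module names, and the weights are hypothetical.

def allocate_budget(total_tokens, module_weights):
    """Split a token budget across modules proportionally to their weights."""
    total_weight = sum(module_weights.values())
    return {
        name: int(total_tokens * weight / total_weight)
        for name, weight in module_weights.items()
    }

budget = allocate_budget(100_000, {
    "mecw-assessment": 1,
    "subagent-coordination": 2,  # heavier: subagents carry decomposed work
    "synthesis": 1,
})
print(budget)
# {'mecw-assessment': 25000, 'subagent-coordination': 50000, 'synthesis': 25000}
```

Weighting subagent coordination more heavily reflects that decomposed work typically consumes the largest share of the window.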
Module Selection Strategy
```python
def select_optimal_modules(context_situation, task_complexity):
    """Pick the smallest module set that addresses current context pressure."""
    if context_situation == "CRITICAL":
        # Context is already under pressure: assess and offload to subagents.
        return ["mecw-assessment", "subagent-coordination"]
    elif task_complexity == "high":
        # Headroom remains, but the task itself warrants decomposition.
        return ["mecw-principles", "subagent-coordination"]
    else:
        # Default: a lightweight assessment is enough.
        return ["mecw-assessment"]
```
Context Classification
| Utilization | Status | Action |
|---|---|---|
| < 30% | LOW | Continue normally |
| 30-50% | MODERATE | Monitor, apply principles |
| > 50% | CRITICAL | Immediate optimization required |
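The thresholds in the table above can be expressed as a small helper. This is a minimal sketch with an illustrative function name, not the skill's own implementation:

```python
def classify_context(utilization):
    """Map context utilization (0.0-1.0) to the status/action table above."""
    if utilization < 0.30:
        return "LOW", "Continue normally"
    elif utilization <= 0.50:
        return "MODERATE", "Monitor, apply principles"
    else:
        return "CRITICAL", "Immediate optimization required"

print(classify_context(0.42))  # ('MODERATE', 'Monitor, apply principles')
```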
Large Output Handling (Claude Code 2.1.2+)
Behavior Change: Large bash command and tool outputs are saved to disk instead of being truncated; file references are provided for access.
Impact on Context Optimization
| Scenario | Before 2.1.2 | After 2.1.2 |
|---|---|---|
| Large test output | Truncated, partial data | Full output via file reference |
| Verbose build logs | Lost after 30K chars | Complete, accessible on-demand |
| Context pressure | Reduced by truncation | Unchanged; output adds context only when read |
Best Practices
- Avoid pre-emptive reads: Large outputs are referenced, not automatically loaded into context.
- Read selectively: Use `head`, `tail`, or `grep` on file references.
- Leverage full data: Quality gates can access complete test results via files.
- Monitor growth: File references are small, but reading the full files adds to context.
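A selective-read workflow might look like the following. The log path is hypothetical, standing in for a file reference returned when a tool's output was saved to disk; the `printf` line only fabricates a stand-in file so the example is self-contained:

```shell
# Hypothetical path standing in for a saved-output file reference.
LOG=/tmp/large-test-output.log
printf 'line %s\n' $(seq 1 1000) > "$LOG"   # stand-in for a large saved output

head -n 3 "$LOG"        # skim the beginning instead of reading everything
tail -n 3 "$LOG"        # check the end, where summaries often land
grep -c '^line' "$LOG"  # count matches without pulling the file into context
```

The point is that only a few lines of each command's output enter the context, while the complete data stays available on disk.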
Integration Points
- Token Conservation: Receives usage strategies, returns MECW-compliant optimizations.
- CPU/GPU Performance: Aligns context optimization with resource constraints.
- MCP Code Execution: Delegates complex patterns to specialized MCP modules.
Resources
- MECW Theory: See `modules/mecw-principles.md` for core concepts and the 50% rule.
- MECW Theory (Extended): See `modules/mecw-theory.md` for pressure levels, compliance checking, and monitoring patterns.
- Context Analysis: See `modules/mecw-assessment.md` for risk identification.
- Workflow Delegation: See `modules/subagent-coordination.md` for decomposition patterns.
- Context Waiting: See `modules/context-waiting.md` for deferred loading strategies.
Troubleshooting
Common Issues
If context usage remains high after optimization, check for large files that were read entirely rather than selectively. If MECW assessments fail, ensure that your environment provides accurate token count metadata. For permission errors when writing output logs to /tmp, verify that the project's temporary directory is writable.
Source
git clone https://github.com/athola/claude-night-market

View the skill on GitHub: https://github.com/athola/claude-night-market/blob/master/plugins/conserve/skills/context-optimization/SKILL.md

Overview
Context optimization helps you assess context pressure before tackling complex tasks. It routes work to specialized MECW-compliant modules to maintain efficiency and prevent context overflow. This is essential for multi-file analyses, long tool chains, and other high-context workloads.
How This Skill Works
Context optimization monitors usage, compares it to thresholds, and routes work to specialized MECW-compliant modules. It coordinates subagent workflows, manages token budgets, and synthesizes results from modular execution to keep context pressure in check.
When to Use It
- Context usage nears 50% of the window
- Tasks require decomposition into subtasks
- Complex multi-file analyses or long tool chains
- High context pressure requiring MECW-compliant optimization
- Before starting a complex workflow when MCP code execution is not already engaged
Quick Start
- Step 1: Analyze current context usage with `python -m conserve.context_analyzer`
- Step 2: If usage is high or tasks are complex, route to MECW and subagent modules
- Step 3: Coordinate workflows, monitor token budgets, and synthesize results
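The three steps above can be sketched as a single routing function. This is an illustrative flow under the 50% rule, with a hypothetical function name rather than the skill's actual entry point:

```python
def optimize_context(utilization, task_complexity):
    """Illustrative flow: analyze pressure, then route to modules."""
    # Step 1: analyze (utilization would come from the context analyzer).
    if utilization < 0.30 and task_complexity != "high":
        return ["continue"]  # low pressure, simple task: no optimization needed
    # Step 2: route; under critical pressure or high complexity, add subagents.
    modules = ["mecw-assessment"]
    if utilization > 0.50 or task_complexity == "high":
        modules.append("subagent-coordination")
    # Step 3: results from each module would be synthesized downstream.
    return modules

print(optimize_context(0.6, "low"))  # ['mecw-assessment', 'subagent-coordination']
```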
Best Practices
- Monitor context usage with the analyzer before starting work
- Route to MECW-compliant modules and coordinate subagents
- Coordinate token budget across modules to avoid waste
- Synthesize modular results into a single coherent output
- Use file references for large outputs and read selectively
Example Use Cases
- Optimizing a data pipeline that touches multiple files and CLI tools
- Coordinating a multi file code analysis before a major refactor
- Managing an ML experiment with verbose logs and long tool chains
- Orchestrating a security audit across several modules and toolchains
- Decomposing a research task that spans multiple data sources