reasoning-trace-optimizer
npx machina-cli add skill Sdkwork-Cloud/skills-repository/interleaved_thinking --openclaw

Reasoning Trace Optimizer
Debug and optimize AI agents by analyzing their reasoning traces. This skill uses MiniMax M2.1's interleaved thinking to provide deep insight into agent decision-making and generate concrete improvements.
When to Activate
- User asks to "debug agent", "analyze reasoning", or "optimize prompt"
- Agent task fails and user wants to understand why
- User mentions "context degradation", "tool confusion", or "instruction drift"
- Request to improve agent performance or reduce errors
- User wants to generate shareable learnings from debugging sessions
- After repeated failures on similar tasks
Core Concepts
Interleaved Thinking
Unlike standard reasoning models that think once at the start, interleaved thinking allows reasoning BETWEEN each tool interaction. This is critical because:
- Long-horizon tasks require maintaining focus across many turns
- External perturbations (tool outputs, environment changes) need real-time adaptation
- Debugging requires seeing HOW decisions were made, not just WHAT was output
The Optimization Loop
Execute Agent → Capture Traces → Analyze Patterns → Optimize Prompt → Re-run
↑____________|
Each iteration improves the prompt based on detected patterns until convergence.
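The loop above can be sketched in Python. The names here (run_agent, score_trace, improve_prompt, optimize) are hypothetical stand-ins for illustration, not the library's API:

```python
# Sketch of the optimization loop; the three helpers below are stubs
# standing in for real agent execution, trace scoring, and rewriting.

def run_agent(prompt: str, task: str) -> list[str]:
    # Stub: would execute the agent and return its reasoning trace.
    return [f"{prompt}: step for {task}"]

def score_trace(trace: list[str]) -> float:
    # Stub: would score the trace 0-100 from detected failure patterns.
    return 85.0

def improve_prompt(prompt: str, trace: list[str]) -> str:
    # Stub: would rewrite the prompt based on detected patterns.
    return prompt + " Stay focused on the original goal."

def optimize(task: str, prompt: str,
             threshold: float = 80.0, max_iters: int = 5) -> tuple[str, float]:
    score = 0.0
    for _ in range(max_iters):
        trace = run_agent(prompt, task)         # Execute Agent -> Capture Traces
        score = score_trace(trace)              # Analyze Patterns
        if score >= threshold:                  # Convergence check
            break
        prompt = improve_prompt(prompt, trace)  # Optimize Prompt -> Re-run
    return prompt, score

final_prompt, final_score = optimize(
    "Search for Python tutorials", "You are a research assistant.")
print(final_score)  # → 85.0
```

The convergence check stops the loop early once the score clears the threshold, so well-behaved prompts cost only one iteration.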
Pattern Detection
Common failure patterns the analyzer detects:
| Pattern | Description |
|---|---|
| context_degradation | Model loses track of information over long contexts |
| tool_confusion | Model misunderstands tool capabilities or outputs |
| instruction_drift | Model gradually deviates from the original instructions |
| goal_abandonment | Model stops pursuing the original goal |
| circular_reasoning | Model repeats similar actions without making progress |
| premature_conclusion | Model concludes before completing the task |
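As one illustration of how a pattern might be flagged, a naive heuristic for circular_reasoning could count repeated identical tool calls. This is a sketch only; detect_circular_reasoning is not TraceAnalyzer's actual detection logic:

```python
from collections import Counter

# Naive circular_reasoning heuristic: flag any tool call (name + args)
# repeated more than max_repeats times in a trace. Illustrative sketch,
# not the analyzer's real implementation.
def detect_circular_reasoning(tool_calls: list[dict], max_repeats: int = 2) -> bool:
    signatures = Counter(
        (call["name"], str(sorted(call.get("args", {}).items())))
        for call in tool_calls
    )
    return any(count > max_repeats for count in signatures.values())

trace = [
    {"name": "search", "args": {"q": "python tutorials"}},
    {"name": "search", "args": {"q": "python tutorials"}},
    {"name": "search", "args": {"q": "python tutorials"}},
]
print(detect_circular_reasoning(trace))  # → True
```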
Usage Modes
Mode 1: M2.1 Agent Debugging
Run a task through M2.1 and analyze its reasoning:
from reasoning_trace_optimizer import TraceCapture, TraceAnalyzer

capture = TraceCapture()
trace = capture.run(
    task="Search for Python tutorials and summarize them",
    system_prompt="You are a research assistant.",
    tools=[search_tool],
    tool_executor=execute_search,
)

analyzer = TraceAnalyzer()
analysis = analyzer.analyze(trace)

print(f"Score: {analysis.overall_score}/100")
for pattern in analysis.patterns:
    print(f"Found: {pattern.type.value} - {pattern.suggestion}")
Mode 2: Full Optimization Loop
Automatically iterate until the prompt is optimized:
from reasoning_trace_optimizer import OptimizationLoop, LoopConfig

config = LoopConfig(
    max_iterations=5,
    min_score_threshold=80.0,
)
loop = OptimizationLoop(config=config)
result = loop.run(
    task="Analyze this codebase and suggest improvements",
    initial_prompt="You are a code reviewer.",
    tools=[read_file_tool, search_tool],
    tool_executor=execute_tool,
)

print(f"Improved: {result.initial_score} → {result.final_score}")
print(f"Final prompt:\n{result.final_prompt}")
Mode 3: Universal Session Analysis
Analyze any agent's previous thinking (works with Claude, GPT, etc.):
When this skill is activated in Claude Code, it can analyze the current session's thinking blocks to identify issues and suggest improvements.
/reasoning-trace-optimizer analyze-session
Mode 4: Generate Shareable Skills
Convert optimization learnings into reusable Agent Skills:
from reasoning_trace_optimizer import SkillGenerator

generator = SkillGenerator()
skill_path = generator.generate(
    result=loop_result,
    skill_name="web-search-best-practices",
    output_dir="./skills",
)
CLI Commands
# Capture reasoning trace
rto capture "Search for Python tutorials" -s "You are a helpful assistant."
# Analyze a task
rto analyze "Debug this code" -o analysis.txt
# Run optimization loop
rto optimize "Research AI papers" --max-iterations 5 --generate-skill
# Generate skill from artifacts
rto generate-skill my-skill-name --artifacts-dir ./optimization_artifacts
Integration with Claude Code
Auto-trigger on Failure
Add to your hooks to automatically analyze failures:
{
  "hooks": {
    "post_tool_error": {
      "command": "rto analyze-session --last-error"
    }
  }
}
On-demand Analysis
Use the slash command to analyze current session:
/reasoning-trace-optimizer
This will:
- Extract thinking blocks from the current session
- Identify patterns and issues
- Suggest prompt improvements
- Optionally update the system prompt
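Extracting thinking blocks along these lines is straightforward to sketch. The message shape below is a hypothetical example, not Claude Code's actual session format:

```python
# Collect thinking blocks from a session transcript. The message shape
# here is a hypothetical example, not Claude Code's on-disk format.
def extract_thinking_blocks(messages: list[dict]) -> list[str]:
    blocks = []
    for message in messages:
        for part in message.get("content", []):
            if part.get("type") == "thinking":
                blocks.append(part["text"])
    return blocks

session = [
    {"role": "assistant", "content": [
        {"type": "thinking", "text": "I should search first."},
        {"type": "text", "text": "Searching now."},
    ]},
]
print(extract_thinking_blocks(session))  # → ['I should search first.']
```

The collected blocks are what the analyzer scans for the failure patterns listed above.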
Guidelines
- Preserve full context: M2.1 requires full response history including thinking blocks for optimal performance
- Use appropriate tools: Define tools clearly with unambiguous descriptions
- Set realistic convergence thresholds: 5-10% improvement per iteration is typical
- Review generated skills: Auto-generated skills should be reviewed before sharing
- Monitor token usage: Each optimization iteration uses significant tokens
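To make the last guideline concrete, a rough worst-case budget multiplies the iteration cap by tokens per iteration. The 20,000-token per-iteration figure is an arbitrary assumption for illustration, not a measured value:

```python
# Rough worst-case token budget for one optimization run; the
# per-iteration figure is an illustrative assumption, not measured.
tokens_per_iteration = 20_000  # capture + analysis + prompt rewrite (assumed)
max_iterations = 5

worst_case_tokens = tokens_per_iteration * max_iterations
print(worst_case_tokens)  # → 100000
```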
Examples
Before Optimization
System: You are a helpful assistant.
Issue: Agent called wrong tools, lost track of goal after 3 turns
Score: 45/100
Patterns: tool_confusion, goal_abandonment
After Optimization
System: You are a research assistant focused on finding accurate information.
IMPORTANT GUIDELINES:
- Always verify search results before summarizing
- If a tool returns an error, try an alternative approach
- Keep track of your original goal throughout the task
- Validate findings against multiple sources when possible
Issue: None
Score: 85/100
Patterns: None detected
References
- MiniMax M2.1 Documentation: https://platform.minimax.io/docs
- Interleaved Thinking Guide: see docs/interleavedthinking.md
- Agent Generalization: see docs/agentthinking.md
Skill Metadata
Created: 2025-01-11
Author: Muratcan Koylan
Version: 0.1.0
Powered by: MiniMax M2.1
Partnership: Built in collaboration with MiniMax AI
Source
https://github.com/Sdkwork-Cloud/skills-repository/blob/main/packages/Agent-Skills-for-Context-Engineering/examples/interleaved_thinking/SKILL.md

Overview
Reasoning Trace Optimizer analyzes agent reasoning traces to reveal how decisions are made. It uses interleaved thinking to surface the reasoning between tool interactions, enabling concrete prompt improvements and fewer errors. This helps diagnose failures, reduce context drift, and reach reliable performance faster.
How This Skill Works
Technically, it uses interleaved thinking to capture the decisions made between tool interactions, analyzes the resulting traces for failure patterns, and derives actionable improvements. The optimization loop then runs Execute Agent → Capture Traces → Analyze Patterns → Optimize Prompt → Re-run, iterating until convergence. This makes decision points visible and prompts systematically refinable.
When to Use It
- When you want to debug agent reasoning or analyze why it failed
- When the agent shows context degradation or tool confusion
- When you need to improve performance and reduce errors
- When you want to generate shareable learnings from debugging sessions
- After repeated failures on similar tasks
Quick Start
- Step 1: Trigger trace capture by activating the skill with a "debug agent" or "optimize prompt" request
- Step 2: Run TraceAnalyzer to analyze the captured traces and identify failure patterns
- Step 3: Apply the generated prompt improvements and re-run until the final score improves
Best Practices
- Enable trace capture for each run and review the resulting patterns before changing prompts
- Watch for common failure patterns: context_degradation, tool_confusion, instruction_drift, goal_abandonment, circular_reasoning, premature_conclusion
- Use the Optimization Loop to iterate until the score meets your threshold and the prompt stabilizes
- Document insights and convert them into reusable improvements or new agent skills
- Test improvements across different task types to ensure robustness
Example Use Cases
- Debug a web-search agent that struggles to summarize tutorials and identify where its reasoning derails
- Improve a code-review assistant by analyzing its decision points and prompts
- Diagnose context drift in long-running conversations to preserve task memory
- Reduce tool confusion when multiple tools are involved by clarifying tool outputs
- Generate shareable debugging learnings to create new agent skills