Micro-Optimizer Skill
Install: npx machina-cli add skill a5c-ai/babysitter/micro-optimizer --openclaw
Purpose
Apply language-specific micro-optimizations to squeeze maximum performance from competitive programming solutions.
Capabilities
- C++ optimization tricks (fast I/O, pragma optimizations)
- Python optimization (PyPy hints, list comprehensions)
- Memory layout optimization
- Vectorization opportunities
- Compiler-specific optimizations
Target Processes
- code-level-optimization
- io-optimization
- memory-optimization
Optimization Catalog
C++ Optimizations
- Fast I/O: ios_base::sync_with_stdio(false)
- Pragma optimizations: #pragma GCC optimize
- Inline expansion
- Loop unrolling
- Memory prefetching
Python Optimizations
- Use PyPy when possible
- List comprehensions over loops
- Local variable caching
- __slots__ for classes
- Avoiding global lookups
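As a sketch of the `__slots__` item, assuming a hypothetical `Point` class (not part of the skill itself):

```python
# Declaring __slots__ removes the per-instance __dict__, which cuts
# memory use and speeds attribute access in tight loops.
class Point:
    __slots__ = ("x", "y")

    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
# A __slots__ instance has no __dict__ to allocate or search.
assert not hasattr(p, "__dict__")
```

The trade-off: you cannot add new attributes at runtime, which is usually fine for competitive-programming value objects.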
General Optimizations
- Branch prediction hints
- Cache-friendly data layout
- Avoiding unnecessary copies
- Bit manipulation tricks
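Two of the bit-manipulation tricks above, sketched in Python (the function names are ours, not from the skill):

```python
def is_power_of_two(n):
    # n & (n - 1) clears the lowest set bit; if the result is zero,
    # exactly one bit was set.
    return n > 0 and n & (n - 1) == 0

def lowest_set_bit(n):
    # Two's-complement trick: n & -n isolates the least significant set bit.
    return n & -n
```

Both replace loops or divisions with a single bitwise operation.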
Input Schema
{
"type": "object",
"properties": {
"code": { "type": "string" },
"language": {
"type": "string",
"enum": ["cpp", "python", "java"]
},
"optimizationLevel": {
"type": "string",
"enum": ["safe", "aggressive", "maximum"]
},
"preserveReadability": { "type": "boolean", "default": true }
},
"required": ["code", "language"]
}
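A request conforming to this schema might look like the following sketch (the payload values are illustrative, not from the source):

```python
request = {
    "code": "print(sum(range(10)))",
    "language": "python",          # one of "cpp", "python", "java"
    "optimizationLevel": "safe",   # one of "safe", "aggressive", "maximum"
    "preserveReadability": True,   # defaults to true when omitted
}

# Minimal structural checks mirroring the schema's constraints.
assert all(k in request for k in ("code", "language"))
assert request["language"] in {"cpp", "python", "java"}
```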
Output Schema
{
"type": "object",
"properties": {
"success": { "type": "boolean" },
"optimizedCode": { "type": "string" },
"appliedOptimizations": { "type": "array" },
"expectedSpeedup": { "type": "string" },
"warnings": { "type": "array" }
},
"required": ["success", "optimizedCode"]
}
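A response conforming to this schema might look like the sketch below (every field value here is hypothetical):

```python
response = {
    "success": True,
    "optimizedCode": "print(45)",                 # hypothetical result
    "appliedOptimizations": ["constant-folding"], # illustrative entry
    "expectedSpeedup": "~1.2x",
    "warnings": [],
}

# The schema requires only these two fields.
assert all(k in response for k in ("success", "optimizedCode"))
```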
Source
git clone https://github.com/a5c-ai/babysitter
Skill file: plugins/babysitter/skills/babysit/process/specializations/algorithms-optimization/skills/micro-optimizer/SKILL.md
Overview
The Micro-Optimizer skill applies language-specific micro-optimizations to squeeze maximum performance from competitive programming solutions. It covers C++ I/O tweaks, pragma flags, Python idioms, memory layout, and vectorization, helping you shave precious milliseconds while keeping readability when requested.
How This Skill Works
The skill analyzes hot spots in code and applies language-tailored optimizations (e.g., fast I/O and inline behavior in C++, list comprehensions and local caching in Python) to produce an optimized version and a speedup estimate. It leverages the optimization catalog to guide changes and outputs a structured result indicating applied optimizations.
When to Use It
- When you need extra performance under tight time limits in competitive programming
- When I/O is a bottleneck and language-specific fast paths help
- When memory layout and cache locality can reduce latency
- When hot loops benefit from language-tailored optimizations (e.g., inline, prefetch)
- When you want compiler hints and vectorization opportunities to accelerate computation
Quick Start
- Step 1: Profile the solution to identify hot loops and bottlenecks
- Step 2: Apply language-specific micro-optimizations guided by the optimization catalog
- Step 3: Benchmark and verify speedups and correctness across test cases
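Steps 2 and 3 can be sketched with the stdlib `timeit` module, comparing a plain loop against a list comprehension (the workload is illustrative):

```python
import timeit

def squares_loop(n):
    out = []
    for i in range(n):
        out.append(i * i)
    return out

def squares_comp(n):
    # The list comprehension avoids repeated method lookup and
    # per-iteration append calls in the interpreter loop.
    return [i * i for i in range(n)]

# Correctness first, speed second: both variants must agree
# before their timings are worth comparing.
assert squares_loop(1000) == squares_comp(1000)

loop_t = timeit.timeit(lambda: squares_loop(1000), number=200)
comp_t = timeit.timeit(lambda: squares_comp(1000), number=200)
print(f"loop: {loop_t:.4f}s  comprehension: {comp_t:.4f}s")
```

Benchmark on inputs close to the contest's limits; micro-optimizations that win on tiny inputs can wash out at scale.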
Best Practices
- Profile hot paths before optimizing
- Apply one optimization at a time and benchmark
- Prefer readability-safe options unless aggressive optimization is needed
- Use language-specific guidelines: fast I/O for C++, list comprehensions for Python, cache-friendly layouts
- Document changes and ensure correctness across edge cases
Example Use Cases
- C++: use ios_base::sync_with_stdio(false) and cin.tie(nullptr) for faster input
- C++: apply #pragma GCC optimize to enable aggressive optimizations
- Python: replace loops with list comprehensions to build results
- Python: cache local variables to avoid global lookups
- Memory layout: organize data to improve cache locality and prefetching
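The local-variable caching use case can be sketched as follows (the function and data are illustrative):

```python
import math

def dist_sum(points):
    # Cache the global/attribute lookup in a local: LOAD_FAST is
    # cheaper than resolving math.sqrt on every iteration.
    sqrt = math.sqrt
    total = 0.0
    for x, y in points:
        total += sqrt(x * x + y * y)
    return total
```

The same pattern applies to bound methods, e.g. caching `append = out.append` before a hot loop.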