code-profiler
npx machina-cli add skill a5c-ai/babysitter/code-profiler --openclaw
Code Profiler Skill
Purpose
Profile algorithm implementations to identify performance bottlenecks and optimization opportunities.
Capabilities
- Runtime profiling
- Memory profiling
- Cache miss analysis
- Hot spot identification
- Optimization suggestions
- Comparative benchmarking
Target Processes
- code-level-optimization
- complexity-optimization
- memory-optimization
Profiling Dimensions
Time Profiling
- Function-level timing
- Line-by-line profiling
- Call graph analysis
- Hot spot detection
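The function-level timing described above can be reproduced with Python's built-in cProfile and pstats modules. The workload below is a made-up example (quadratic vs. linear string building), not part of the skill itself; it is only a sketch of the kind of report the time dimension produces.

```python
import cProfile
import io
import pstats

def slow_concat(n):
    # Quadratic string building: a classic hot spot.
    s = ""
    for i in range(n):
        s += str(i)
    return s

def fast_concat(n):
    # Linear alternative for comparison.
    return "".join(str(i) for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
slow_concat(20_000)
fast_concat(20_000)
profiler.disable()

# Print the top functions sorted by cumulative time.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(5)
print(stream.getvalue())
```

The per-function listing is exactly the raw material for hot spot detection: the function dominating cumulative time is the first optimization target.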
Memory Profiling
- Heap allocation tracking
- Memory leak detection
- Peak memory usage
- Allocation patterns
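Heap allocation tracking and peak-usage measurement of this kind can be sketched with Python's standard tracemalloc module; the `build_table` workload is a hypothetical allocation-heavy function used only for illustration.

```python
import tracemalloc

def build_table(rows):
    # Allocates one list per row; shows up as a heap hot spot.
    return [[0] * 100 for _ in range(rows)]

tracemalloc.start()
table = build_table(1_000)
current, peak = tracemalloc.get_traced_memory()

# Top allocation sites, grouped by source line.
snapshot = tracemalloc.take_snapshot()
top = snapshot.statistics("lineno")
tracemalloc.stop()

print(f"current={current} bytes, peak={peak} bytes")
for stat in top[:3]:
    print(stat)
```

Grouping statistics by line number is what lets you correlate allocations with specific functions when hunting leaks or bloat.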
Cache Analysis
- Cache miss rates
- Memory access patterns
- Data locality issues
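Pure Python blunts hardware cache effects (object indirection dominates), so real miss counts come from tools such as perf or Valgrind, mentioned under Integration below. Still, traversal order over a 2D structure illustrates the access-pattern issue this dimension flags; this is a sketch, not a precise cache measurement.

```python
import time

N = 1_000
grid = [[1] * N for _ in range(N)]

def sum_row_major(g):
    # Walks each inner list contiguously: friendly access pattern.
    total = 0
    for row in g:
        for v in row:
            total += v
    return total

def sum_col_major(g):
    # Jumps between rows on every step: strided access pattern.
    total = 0
    for j in range(N):
        for i in range(N):
            total += g[i][j]
    return total

t0 = time.perf_counter()
row_sum = sum_row_major(grid)
t_row = time.perf_counter() - t0

t0 = time.perf_counter()
col_sum = sum_col_major(grid)
t_col = time.perf_counter() - t0

print(f"row-major: {t_row:.4f}s  column-major: {t_col:.4f}s")
```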
Input Schema
{
  "type": "object",
  "properties": {
    "code": { "type": "string" },
    "language": { "type": "string" },
    "profileType": {
      "type": "string",
      "enum": ["time", "memory", "cache", "all"]
    },
    "testInput": { "type": "string" },
    "iterations": { "type": "integer", "default": 1 }
  },
  "required": ["code", "profileType"]
}
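A request conforming to this schema might look like the following; the field values are hypothetical, and only `code` and `profileType` are required.

```python
import json

# Hypothetical request conforming to the input schema above.
request = {
    "code": "def total(xs):\n    return sum(xs)",
    "language": "python",
    "profileType": "all",   # one of: time, memory, cache, all
    "testInput": "[1, 2, 3]",
    "iterations": 10,       # defaults to 1 when omitted
}

# Minimal check of the schema's "required" fields.
missing = [k for k in ("code", "profileType") if k not in request]
assert not missing, f"missing required fields: {missing}"
print(json.dumps(request, indent=2))
```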
Output Schema
{
  "type": "object",
  "properties": {
    "success": { "type": "boolean" },
    "timing": { "type": "object" },
    "memory": { "type": "object" },
    "hotSpots": { "type": "array" },
    "recommendations": { "type": "array" }
  },
  "required": ["success"]
}
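A caller might consume a response shaped like this schema as follows. The schema only fixes the top-level types, so the inner fields (`selfSeconds`, `function`, and the sample values) are illustrative assumptions, not a documented contract.

```python
# Hypothetical response matching the output schema; inner field
# names beyond the schema's top-level types are assumptions.
result = {
    "success": True,
    "timing": {"totalSeconds": 0.42},
    "hotSpots": [
        {"function": "parse", "selfSeconds": 0.08},
        {"function": "total", "selfSeconds": 0.30},
    ],
    "recommendations": ["Vectorize the inner loop in total()"],
}

ranked = []
if result["success"]:
    # Work through hot spots in descending cost order.
    ranked = sorted(result["hotSpots"],
                    key=lambda s: s["selfSeconds"], reverse=True)
    for spot in ranked:
        print(f"{spot['function']}: {spot['selfSeconds']:.2f}s")
```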
Integration
Can integrate with profiling tools like gprof, perf, Valgrind, cProfile, and language-specific profilers.
Source
git clone https://github.com/a5c-ai/babysitter
The skill definition lives in the repository at plugins/babysitter/skills/babysit/process/specializations/algorithms-optimization/skills/code-profiler/SKILL.md.
Overview
Code Profiler analyzes runtime and memory to locate bottlenecks and optimization opportunities. It covers time, memory, and cache analysis, producing hot spots, allocation patterns, and benchmarks to guide improvements. It can integrate with popular profiling tools across languages to fit your workflow.
How This Skill Works
You provide code, the language, and a profileType (time, memory, cache, or all). The profiler instruments or samples the target process; collects function-level timing, memory-allocation, and cache-miss data; and then reports hot spots and actionable recommendations.
When to Use It
- When a function or script runs slower than expected
- When memory usage grows or leaks are suspected
- When CPU performance is limited by poor data locality or cache misses
- When comparing alternative implementations or algorithms
- When you need a reliable baseline before and after optimization
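For the comparative case, the standard timeit module gives a repeatable head-to-head measurement. The two sort implementations below are illustrative stand-ins for whatever alternatives you are comparing.

```python
import random
import timeit

data = [random.random() for _ in range(1_000)]

def builtin_sort(xs):
    # Baseline: the C-implemented built-in.
    return sorted(xs)

def insertion_sort(xs):
    # Alternative under test: O(n^2) pure-Python insertion sort.
    out = list(xs)
    for i in range(1, len(out)):
        key = out[i]
        j = i - 1
        while j >= 0 and out[j] > key:
            out[j + 1] = out[j]
            j -= 1
        out[j + 1] = key
    return out

t_builtin = timeit.timeit(lambda: builtin_sort(data), number=5)
t_insert = timeit.timeit(lambda: insertion_sort(data), number=1)
print(f"sorted() x5: {t_builtin:.4f}s  insertion sort x1: {t_insert:.4f}s")
```

Before timing alternatives, verify they produce identical results; a fast-but-wrong variant is not a candidate.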
Quick Start
- Step 1: Provide code and specify language and profileType (time, memory, cache, or all)
- Step 2: Run profiling with representative testInput and iterations
- Step 3: Review timing, memory, hotSpots, and recommendations, then iterate
Best Practices
- Run with representative inputs and multiple iterations to stabilize measurements
- Start with all profiling dimensions to get a full picture
- Identify hot spots first, then target fixes with smaller, repeatable runs
- Correlate memory allocations with specific functions to locate leaks or bloat
- Use a consistent profiler toolchain (gprof, perf, Valgrind, cProfile) across runs and compare results
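One way to stabilize measurements across iterations, as the first practice above suggests, is timeit.repeat: run the whole timing loop several times and keep the minimum, which the timeit documentation recommends as the least noise-contaminated estimate. The workload here is a placeholder.

```python
import timeit

def workload():
    # Placeholder computation standing in for the code under test.
    return sum(i * i for i in range(10_000))

# Repeat the full measurement 5 times, 50 iterations each; the
# minimum filters out scheduler and warm-up noise.
runs = timeit.repeat(workload, number=50, repeat=5)
best = min(runs)
print(f"best of 5 runs: {best:.4f}s for 50 iterations")
```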
Example Use Cases
- Profile a Python data cleaning script to cut runtime by focusing on hot spots and memory allocations
- Analyze a C++ pathfinding algorithm to reduce cache misses and improve locality
- Detect memory leaks in a long-running service by tracing heap allocations
- Benchmark two sorting implementations to choose the most cache-friendly
- Profile a web request handler to minimize latency on hot code paths