# Performance Profiling

Install:

```
npx machina-cli add skill vudovn/antigravity-kit/performance-profiling --openclaw
```
Measure, analyze, optimize - in that order.
## 🔧 Runtime Scripts
Execute these for automated profiling:
| Script | Purpose | Usage |
|---|---|---|
| `scripts/lighthouse_audit.py` | Lighthouse performance audit | `python scripts/lighthouse_audit.py https://example.com` |
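The internals of `scripts/lighthouse_audit.py` are not shown here; as a rough sketch, such a wrapper typically shells out to the Lighthouse CLI (assumed installed via `npm i -g lighthouse`) and extracts the headline metrics from its JSON report. The audit IDs below (`largest-contentful-paint`, etc.) are standard Lighthouse report keys; the real script may differ.

```python
import json
import subprocess
import sys


def run_lighthouse(url: str) -> dict:
    """Run the Lighthouse CLI headlessly and return the parsed JSON report.
    Hypothetical sketch; requires the `lighthouse` CLI on PATH."""
    result = subprocess.run(
        ["lighthouse", url, "--output=json", "--quiet",
         "--chrome-flags=--headless"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)


def extract_core_metrics(report: dict) -> dict:
    """Pull the headline lab metrics out of a Lighthouse JSON report."""
    audits = report["audits"]
    return {
        "LCP_ms": audits["largest-contentful-paint"]["numericValue"],
        "CLS": audits["cumulative-layout-shift"]["numericValue"],
        "TBT_ms": audits["total-blocking-time"]["numericValue"],
    }


if __name__ == "__main__" and len(sys.argv) > 1:
    print(extract_core_metrics(run_lighthouse(sys.argv[1])))
```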
## 1. Core Web Vitals

### Targets
| Metric | Good | Poor | Measures |
|---|---|---|---|
| LCP | < 2.5s | > 4.0s | Loading |
| INP | < 200ms | > 500ms | Interactivity |
| CLS | < 0.1 | > 0.25 | Stability |
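The three-band rating in the table (good / needs improvement / poor) is easy to encode. A minimal classifier using the thresholds above, with each metric kept in its conventional unit:

```python
# Thresholds from the table above: (good_max, poor_min).
# LCP in seconds, INP in milliseconds, CLS unitless.
THRESHOLDS = {
    "LCP": (2.5, 4.0),
    "INP": (200, 500),
    "CLS": (0.1, 0.25),
}


def rate(metric: str, value: float) -> str:
    """Classify a Core Web Vitals value into its rating band."""
    good_max, poor_min = THRESHOLDS[metric]
    if value <= good_max:
        return "good"
    if value > poor_min:
        return "poor"
    return "needs improvement"
```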
### When to Measure
| Stage | Tool |
|---|---|
| Development | Local Lighthouse |
| CI/CD | Lighthouse CI |
| Production | RUM (Real User Monitoring) |
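In the CI/CD stage, Lighthouse CI supports declarative budget files; the core check, however, is simple enough to sketch as a standalone gate, assuming your pipeline can hand it a dict of measured metrics (names and units are whatever your tooling reports):

```python
def check_budget(measured: dict, budget: dict) -> list:
    """Return human-readable failures for every metric over budget.
    An empty list means the performance gate passes."""
    failures = []
    for metric, limit in budget.items():
        value = measured.get(metric)
        if value is not None and value > limit:
            failures.append(f"{metric}: {value} exceeds budget {limit}")
    return failures
```

A CI job would call this after a Lighthouse run and fail the build when the returned list is non-empty.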
## 2. Profiling Workflow

### The 4-Step Process
1. BASELINE → Measure current state
2. IDENTIFY → Find the bottleneck
3. FIX → Make targeted change
4. VALIDATE → Confirm improvement
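Step 4 deserves a concrete check: a fix should clear some minimum relative improvement before you call it a win, otherwise run-to-run noise gets declared progress. A small helper for lower-is-better metrics (the 5% default is an illustrative choice, not a standard):

```python
def validate_improvement(baseline: float, after: float,
                         min_gain: float = 0.05) -> bool:
    """VALIDATE step: did a lower-is-better metric (e.g. LCP in ms)
    improve by at least `min_gain` as a fraction of the baseline?"""
    if baseline <= 0:
        raise ValueError("baseline must be positive")
    return (baseline - after) / baseline >= min_gain
```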
### Profiling Tool Selection
| Problem | Tool |
|---|---|
| Page load | Lighthouse |
| Bundle size | Bundle analyzer |
| Runtime | DevTools Performance |
| Memory | DevTools Memory |
| Network | DevTools Network |
## 3. Bundle Analysis

### What to Look For
| Issue | Indicator |
|---|---|
| Large dependencies | Top of bundle |
| Duplicate code | Multiple chunks |
| Unused code | Low coverage |
| Missing splits | Single large chunk |
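The "duplicate code in multiple chunks" indicator is just an inverted index over the bundler's chunk-to-module map. Bundle analyzers compute this for you; the set arithmetic behind it looks like:

```python
from collections import defaultdict


def find_duplicates(chunks: dict) -> dict:
    """Given {chunk_name: iterable of module names}, return the modules
    that appear in more than one chunk, mapped to the chunks holding them."""
    seen = defaultdict(set)
    for chunk, modules in chunks.items():
        for mod in modules:
            seen[mod].add(chunk)
    return {mod: sorted(c) for mod, c in seen.items() if len(c) > 1}
```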
### Optimization Actions
| Finding | Action |
|---|---|
| Big library | Import specific modules |
| Duplicate deps | Dedupe, update versions |
| Route in main | Code split |
| Unused exports | Tree shake |
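"Tree shake" in the last row rests on dead-export detection: an export nothing imports can be dropped. Bundlers derive this from the module graph; reduced to its essence (module and export names below are illustrative):

```python
def unused_exports(exports_by_module: dict, used: set) -> dict:
    """exports_by_module: {module: set of export names};
    used: set of (module, name) pairs that are imported somewhere.
    Returns only the modules that have never-imported exports —
    these are tree-shaking candidates."""
    result = {}
    for module, names in exports_by_module.items():
        dead = {name for name in names if (module, name) not in used}
        if dead:
            result[module] = dead
    return result
```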
## 4. Runtime Profiling

### Performance Tab Analysis
| Pattern | Meaning |
|---|---|
| Long tasks (>50ms) | UI blocking |
| Many small tasks | Possible batching opportunity |
| Layout/paint | Rendering bottleneck |
| Script | JavaScript execution |
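The 50 ms long-task threshold above also underlies Total Blocking Time: Lighthouse sums the portion of each main-thread task beyond 50 ms. Given a list of task durations pulled from a trace, the computation is:

```python
LONG_TASK_MS = 50  # tasks longer than this block input handling


def long_tasks(task_durations_ms: list) -> list:
    """Tasks that exceed the long-task threshold (UI-blocking)."""
    return [d for d in task_durations_ms if d > LONG_TASK_MS]


def total_blocking_time(task_durations_ms: list) -> float:
    """Sum each long task's overage beyond 50 ms, mirroring how
    Lighthouse derives TBT from the main-thread trace."""
    return sum(d - LONG_TASK_MS
               for d in task_durations_ms if d > LONG_TASK_MS)
```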
### Memory Tab Analysis
| Pattern | Meaning |
|---|---|
| Growing heap | Possible leak |
| Large retained | Check references |
| Detached DOM | Not cleaned up |
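"Growing heap → possible leak" can be turned into a rough automated check: sample heap size at comparable app states (ideally after forcing GC), and flag monotonic growth above some ratio. This is a heuristic for triage, not proof of a leak; the 10% default is an illustrative choice:

```python
def looks_like_leak(heap_samples_mb: list,
                    min_growth_ratio: float = 0.1) -> bool:
    """Flag a possible leak when heap samples grow monotonically and
    the overall growth exceeds `min_growth_ratio` of the first sample."""
    if len(heap_samples_mb) < 3:
        return False  # too few samples to call a trend
    monotonic = all(b >= a for a, b in
                    zip(heap_samples_mb, heap_samples_mb[1:]))
    growth = (heap_samples_mb[-1] - heap_samples_mb[0]) / heap_samples_mb[0]
    return monotonic and growth > min_growth_ratio
```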
## 5. Common Bottlenecks

### By Symptom
| Symptom | Likely Cause |
|---|---|
| Slow initial load | Large JS, render blocking |
| Slow interactions | Heavy event handlers |
| Jank during scroll | Layout thrashing |
| Growing memory | Leaks, retained refs |
6. Quick Win Priorities
| Priority | Action | Impact |
|---|---|---|
| 1 | Enable compression | High |
| 2 | Lazy load images | High |
| 3 | Code split routes | High |
| 4 | Cache static assets | Medium |
| 5 | Optimize images | Medium |
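Why is priority 1 (compression) such a high-impact win? Text assets such as HTML, JS, and CSS are highly repetitive, so gzip (or Brotli) typically shrinks them to a small fraction of their size. A quick demonstration with Python's standard-library `gzip` (the markup string is just a made-up repetitive payload):

```python
import gzip


def gzip_ratio(payload: bytes) -> float:
    """Compressed size as a fraction of the original size."""
    return len(gzip.compress(payload)) / len(payload)


# Repetitive markup, like real HTML/JS/CSS, compresses extremely well.
html = b"<div class='card'>hello</div>" * 500
ratio = gzip_ratio(html)
```

In production this is a server or CDN setting (e.g. enabling gzip/Brotli for text content types), not application code.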
## 7. Anti-Patterns
| ❌ Don't | ✅ Do |
|---|---|
| Guess at problems | Profile first |
| Micro-optimize | Fix biggest issue |
| Optimize early | Optimize when needed |
| Ignore real users | Use RUM data |
**Remember:** The fastest code is code that doesn't run. Remove before optimizing.
## Source

[SKILL.md on GitHub](https://github.com/vudovn/antigravity-kit/blob/main/.agent/skills/performance-profiling/SKILL.md)

## Overview
Performance profiling provides a principled approach to measuring runtime performance, identifying bottlenecks, and applying targeted optimizations. It emphasizes Core Web Vitals, profiling workflows, bundle analysis, and runtime/memory profiling to improve user experience.
## How This Skill Works

Start with a baseline measurement of the current state using automated scripts such as the Lighthouse audit. Then identify bottlenecks with profiling tools (Lighthouse, a bundle analyzer, DevTools), implement targeted fixes, and finally validate the improvements against performance data from Lighthouse or real user monitoring.
## When to Use It
- During feature development to establish a baseline with local Lighthouse measurements.
- In CI/CD to enforce performance gates with Lighthouse CI.
- In production to monitor real-user performance via Real User Monitoring.
- When optimizing bundles or dependencies using bundle analysis and code splitting.
- When diagnosing runtime or memory issues with DevTools Performance and Memory profiling.
## Quick Start

- Step 1: Baseline — run a Lighthouse audit (e.g. `python scripts/lighthouse_audit.py https://example.com`) to measure current performance.
- Step 2: Identify — review results to locate bottlenecks (load, runtime, memory) and decide tools to use.
- Step 3: Fix & Validate — apply targeted changes (code splitting, lazy loading, compression) and re-run profiling to confirm improvements.
## Best Practices
- Baseline first: measure current state before making changes.
- Profile with the right tool for the problem: Lighthouse for load, DevTools for runtime and memory, bundle analyzer for bundles.
- Target the biggest bottlenecks identified, such as large libraries, render-blocking code, or memory leaks.
- Implement concrete fixes (code splitting, lazy loading, compression, caching) and re-measure.
- Validate improvements with consistent metrics (LCP, INP, CLS, task durations, memory growth).
## Example Use Cases
- Audit a homepage with Lighthouse to reduce load metrics like LCP and INP.
- Identify and remove duplicate dependencies in the bundle using a bundle analyzer.
- Code-split routes so each one loads on demand, shrinking the main bundle and improving initial load time.
- Enable compression and caching of static assets as a quick win.
- Use DevTools Performance and Memory tabs to detect long tasks and memory leaks.