# Optimizing Performance

Install this skill with:

```bash
npx machina-cli add skill Putra213/claude-workflow-v2/optimizing-performance --openclaw
```
## Performance Optimization Workflow

Copy this checklist and track progress:

```
Performance Optimization Progress:
- [ ] Step 1: Measure baseline performance
- [ ] Step 2: Identify bottlenecks
- [ ] Step 3: Apply targeted optimizations
- [ ] Step 4: Measure again and compare
- [ ] Step 5: Repeat if targets not met
```

**Critical Rule:** Never optimize without data. Always profile before and after changes.
## Step 1: Measure Baseline

### Profiling Commands

```bash
# Node.js profiling
node --prof app.js
node --prof-process isolate*.log > profile.txt

# Python profiling
python -m cProfile -o profile.stats app.py
python -m pstats profile.stats

# Web performance
lighthouse https://example.com --output=json
```
## Step 2: Identify Bottlenecks

### Common Bottleneck Categories
| Category | Symptoms | Tools |
|---|---|---|
| CPU | High CPU usage, slow computation | Profiler, flame graphs |
| Memory | High RAM, GC pauses, OOM | Heap snapshots, memory profiler |
| I/O | Slow disk/network, waiting | strace, network inspector |
| Database | Slow queries, lock contention | Query analyzer, EXPLAIN |
## Step 3: Apply Optimizations

### Frontend Optimizations

**Bundle Size:**

```javascript
// ❌ Import entire library
import _ from 'lodash';

// ✅ Import only needed functions
import debounce from 'lodash/debounce';

// ✅ Use dynamic imports for code splitting
const HeavyComponent = lazy(() => import('./HeavyComponent'));
```
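As an aside, if `debounce` is the only lodash function in use, a hand-rolled version can remove the dependency entirely. A simplified sketch (unlike lodash's `debounce`, this has no `leading`/`trailing` options and no `cancel()`):

```javascript
// Minimal debounce: delays `fn` until `wait` ms pass with no new call.
function debounce(fn, wait) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), wait);
  };
}

// Usage: rapid calls collapse into one invocation after the pause.
const onResize = debounce(() => console.log('resized'), 200);
```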
**Rendering:**

```jsx
// ❌ Re-renders on every parent update
function Child({ data }) {
  return <ExpensiveComponent data={data} />;
}

// ✅ Memoize when props don't change
const Child = memo(function Child({ data }) {
  return <ExpensiveComponent data={data} />;
});

// ✅ Use useMemo for expensive computations
const processed = useMemo(() => expensiveCalc(data), [data]);
```
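The same idea applies outside React: cache a pure function's results keyed by its inputs. A generic sketch, safe only for pure functions whose arguments survive `JSON.stringify` (and with no cache bound, so long-lived processes would want an LRU cap):

```javascript
// Generic memoization: cache results keyed by serialized arguments.
function memoize(fn) {
  const cache = new Map();
  return (...args) => {
    const key = JSON.stringify(args);
    if (cache.has(key)) return cache.get(key);
    const result = fn(...args);
    cache.set(key, result);
    return result;
  };
}

let calls = 0;
const square = memoize((n) => { calls += 1; return n * n; });
square(4); // computed
square(4); // served from cache; `calls` stays at 1
```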
**Images:**

```html
<!-- ❌ Unoptimized -->
<img src="large-image.jpg" />

<!-- ✅ Optimized -->
<img
  src="image.webp"
  srcset="image-300.webp 300w, image-600.webp 600w"
  sizes="(max-width: 600px) 300px, 600px"
  loading="lazy"
  decoding="async"
/>
```
### Backend Optimizations

**Database Queries:**

```sql
-- ❌ N+1 query problem
SELECT * FROM users;
-- Then, for each user:
SELECT * FROM orders WHERE user_id = ?;

-- ✅ Single query with JOIN
SELECT u.*, o.*
FROM users u
LEFT JOIN orders o ON u.id = o.user_id;

-- ✅ Or use pagination
SELECT * FROM users LIMIT 100 OFFSET 0;
```
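When a JOIN is awkward (for example, across services), the N+1 pattern can also be fixed in application code: issue one batched query (`WHERE user_id IN (...)`) and group the rows in memory. A sketch with a stubbed client; the `db.ordersForUsers` helper is illustrative, not a real API:

```javascript
// Stand-in for a real database client; in practice this would run
// one query: SELECT * FROM orders WHERE user_id IN (...).
const db = {
  async ordersForUsers(ids) {
    const orders = [
      { id: 1, user_id: 10, total: 25 },
      { id: 2, user_id: 10, total: 40 },
      { id: 3, user_id: 11, total: 15 },
    ];
    return orders.filter((o) => ids.includes(o.user_id));
  },
};

// One batched query instead of one query per user, then group in memory.
async function attachOrders(users) {
  const orders = await db.ordersForUsers(users.map((u) => u.id));
  const byUser = new Map();
  for (const o of orders) {
    if (!byUser.has(o.user_id)) byUser.set(o.user_id, []);
    byUser.get(o.user_id).push(o);
  }
  return users.map((u) => ({ ...u, orders: byUser.get(u.id) ?? [] }));
}
```

This trades one round trip plus O(n) grouping for the n round trips of the N+1 version.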
**Caching Strategy:**

```javascript
// Multi-layer caching: check the fastest layers first, fall through to the DB.
const getUser = async (id) => {
  // L1: In-memory cache (fastest)
  let user = memoryCache.get(`user:${id}`);
  if (user) return user;

  // L2: Redis cache (fast); Redis stores strings, so parse on read
  const cached = await redis.get(`user:${id}`);
  if (cached) {
    user = JSON.parse(cached);
    memoryCache.set(`user:${id}`, user, 60);
    return user;
  }

  // L3: Database (slow); backfill both cache layers
  user = await db.users.findById(id);
  await redis.setex(`user:${id}`, 3600, JSON.stringify(user));
  memoryCache.set(`user:${id}`, user, 60);
  return user;
};
```
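The `memoryCache` above is assumed to exist; a minimal TTL-bound version is easy to sketch (no size limit or eviction policy, so a real service would want an LRU cap):

```javascript
// Minimal in-memory cache with a per-entry TTL in seconds, matching
// the memoryCache.get / memoryCache.set(key, value, ttl) calls above.
class MemoryCache {
  constructor() {
    this.store = new Map();
  }
  set(key, value, ttlSeconds) {
    this.store.set(key, { value, expires: Date.now() + ttlSeconds * 1000 });
  }
  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) {
      this.store.delete(key); // lazily evict expired entries on read
      return undefined;
    }
    return entry.value;
  }
}

const memoryCache = new MemoryCache();
```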
**Async Processing:**

```javascript
// ❌ Blocking operation: the HTTP request waits for the whole job
app.post('/upload', async (req, res) => {
  await processVideo(req.file); // Takes 5 minutes
  res.send('Done');
});

// ✅ Queue for background processing; respond immediately with a job ID
app.post('/upload', async (req, res) => {
  const jobId = await queue.add('processVideo', { file: req.file });
  res.send({ jobId, status: 'processing' });
});
```
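The `queue.add` call above matches the style of Redis-backed job queues such as BullMQ. For illustration only, here is a toy in-process version; jobs here die with the process, which is exactly why production systems use an external, persistent queue:

```javascript
// Toy in-process job queue: handlers run off the request path.
// Not durable: a restart loses queued jobs (use BullMQ or similar).
class JobQueue {
  constructor() {
    this.handlers = new Map();
    this.nextId = 1;
  }
  process(name, handler) {
    this.handlers.set(name, handler);
  }
  async add(name, payload) {
    const jobId = this.nextId++;
    setImmediate(() => {
      const handler = this.handlers.get(name);
      if (!handler) return console.error(`no handler for ${name}`);
      // Errors are logged, not thrown: the HTTP response already went out.
      Promise.resolve(handler(payload))
        .catch((err) => console.error(`job ${jobId} failed:`, err));
    });
    return jobId;
  }
}
```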
### Algorithm Optimizations

```javascript
// ❌ O(n²) - nested loops
function findDuplicates(arr) {
  const duplicates = [];
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) duplicates.push(arr[i]);
    }
  }
  return duplicates;
}

// ✅ O(n) - hash set
function findDuplicates(arr) {
  const seen = new Set();
  const duplicates = new Set();
  for (const item of arr) {
    if (seen.has(item)) duplicates.add(item);
    seen.add(item);
  }
  return [...duplicates];
}
```
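One caveat when swapping implementations like this: the two versions are not strictly equivalent, which matters if callers depend on the output shape. A small sketch making the difference concrete (functions renamed here so they can coexist):

```javascript
// The nested-loop version pushes an item once per matching pair,
// while the Set version reports each duplicate exactly once.
function findDuplicatesQuadratic(arr) {
  const duplicates = [];
  for (let i = 0; i < arr.length; i++) {
    for (let j = i + 1; j < arr.length; j++) {
      if (arr[i] === arr[j]) duplicates.push(arr[i]);
    }
  }
  return duplicates;
}

function findDuplicatesLinear(arr) {
  const seen = new Set();
  const duplicates = new Set();
  for (const item of arr) {
    if (seen.has(item)) duplicates.add(item);
    seen.add(item);
  }
  return [...duplicates];
}

// With ['a', 'a', 'a']:
//   quadratic -> ['a', 'a', 'a']  (pairs (0,1), (0,2), (1,2) all match)
//   linear    -> ['a']
```

This is why Step 4's "no functionality regressions" check matters: an optimization that changes observable behavior is a bug, not a speedup.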
## Step 4: Measure Again

After applying optimizations, re-run profiling and compare:

**Comparison Checklist:**
- [ ] Run the same profiling tools as the baseline
- [ ] Compare metrics before vs. after
- [ ] Verify no regressions in other areas
- [ ] Document improvement percentages
## Performance Targets

### Web Vitals
| Metric | Good | Needs Work | Poor |
|---|---|---|---|
| LCP | < 2.5s | 2.5-4s | > 4s |
| FID | < 100ms | 100-300ms | > 300ms |
| CLS | < 0.1 | 0.1-0.25 | > 0.25 |
| TTFB | < 800ms | 800ms-1.8s | > 1.8s |
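These thresholds can be checked mechanically. (Note that FID has since been superseded by INP, Interaction to Next Paint, as a Core Web Vital; the FID thresholds above still describe the older metric.) A small helper encoding the table; the names here are illustrative:

```javascript
// Thresholds from the Web Vitals table above.
// Units: LCP and TTFB in ms, FID in ms, CLS unitless.
const thresholds = {
  LCP:  { good: 2500, poor: 4000 },
  FID:  { good: 100,  poor: 300 },
  CLS:  { good: 0.1,  poor: 0.25 },
  TTFB: { good: 800,  poor: 1800 },
};

function classify(metric, value) {
  const t = thresholds[metric];
  if (!t) throw new Error(`unknown metric: ${metric}`);
  if (value < t.good) return 'good';
  if (value <= t.poor) return 'needs work';
  return 'poor';
}
```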
### API Performance
| Metric | Target |
|---|---|
| P50 Latency | < 100ms |
| P95 Latency | < 500ms |
| P99 Latency | < 1s |
| Error Rate | < 0.1% |
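P50/P95/P99 can be computed from raw latency samples. A sketch using the nearest-rank method (one common convention; monitoring backends often interpolate between ranks instead, so numbers may differ slightly):

```javascript
// Nearest-rank percentile: the p-th percentile is the value at
// rank ceil(p/100 * n) in the sorted samples.
function percentile(samples, p) {
  if (samples.length === 0) throw new Error('no samples');
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Example: for latencies 1..100 ms, the P95 is the 95th smallest value.
const latencies = Array.from({ length: 100 }, (_, i) => i + 1);
const p95 = percentile(latencies, 95);
```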
## Validation

After optimization, validate the results:

**Performance Validation:**
- [ ] Metrics improved from baseline
- [ ] No functionality regressions
- [ ] No new errors introduced
- [ ] Changes are sustainable (not one-time fixes)
- [ ] Performance gains documented
If targets are not met, return to Step 2 and identify the remaining bottlenecks.
## Source

```bash
git clone https://github.com/Putra213/claude-workflow-v2
```

View on GitHub: https://github.com/Putra213/claude-workflow-v2/blob/main/skills/optimizing-performance/SKILL.md

## Overview
Optimizing Performance analyzes where an application slows down and applies targeted improvements across frontend, backend, and database layers. It emphasizes data-driven profiling to reduce load times, speed queries, and shrink bundle sizes.
## How This Skill Works

The workflow starts by measuring a baseline with language- and web-performance profilers, then identifies bottlenecks in the CPU, memory, I/O, or database categories. It then guides targeted optimizations across the frontend (bundle size, rendering, images), backend (database queries, caching, async processing), and algorithm choices, followed by re-measuring to compare results. One rule is critical throughout: never optimize without data; profile before and after every change.
## When to Use It

- Diagnosing overall slowness in an application
- Improving initial and total load times
- Optimizing slow database queries
- Reducing frontend bundle size or fixing rendering performance
- Responding to reported performance issues with a data-driven approach
## Quick Start

1. Measure baseline performance using the appropriate profiling commands (Node.js profiler, Python cProfile, or Lighthouse)
2. Identify bottlenecks by category (CPU, memory, I/O, database) and apply targeted optimizations
3. Re-measure performance and compare against the baseline; repeat if targets are not met
## Best Practices

- Always start by measuring a baseline, and keep the profiling data for later comparison
- Never optimize without data; profile before and after changes
- Use layer-appropriate tools: Node.js profilers, Python profilers, and Lighthouse for web performance
- Prefer targeted optimizations over sweeping changes; consider code splitting, caching, and pagination
- Measure after every change and iterate until targets are met
## Example Use Cases

- Profile a Node.js app with node --prof and analyze the isolate*.log output to find CPU hotspots
- Profile a Python app with python -m cProfile and pstats to locate slow functions
- Use Lighthouse to measure web performance and identify slow rendering or unoptimized images
- Replace N+1 database queries with a single JOIN to reduce database load
- Implement multi-layer caching and a background queue to speed up heavy processing tasks