
Performance Optimization

Overview

Measure before optimizing. Performance work without measurement is guessing — and guessing leads to premature optimization that adds complexity without improving what matters. Profile first, identify the actual bottleneck, fix it, measure again. Optimize only what measurements prove matters.

When to Use

  • Performance requirements exist in the spec (load time budgets, response time SLAs)
  • Users or monitoring report slow behavior
  • Core Web Vitals scores are below thresholds
  • You suspect a change introduced a regression
  • Building features that handle large datasets or high traffic

When NOT to use: Don't optimize before you have evidence of a problem. Premature optimization adds complexity that costs more than the performance it gains.

Core Web Vitals Targets

| Metric | Good | Needs Improvement | Poor |
|---|---|---|---|
| LCP (Largest Contentful Paint) | ≤ 2.5s | ≤ 4.0s | > 4.0s |
| INP (Interaction to Next Paint) | ≤ 200ms | ≤ 500ms | > 500ms |
| CLS (Cumulative Layout Shift) | ≤ 0.1 | ≤ 0.25 | > 0.25 |
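
The thresholds above can also be encoded as data for use in monitoring code. A minimal sketch — the object shape and the `rate` helper are my own, not part of any library:

```typescript
// Core Web Vitals thresholds from the table above (LCP/INP in ms, CLS unitless).
// The constant's shape and the `rate` helper are illustrative, not a library API.
const THRESHOLDS = {
  LCP: { good: 2500, needsImprovement: 4000 },
  INP: { good: 200, needsImprovement: 500 },
  CLS: { good: 0.1, needsImprovement: 0.25 },
} as const;

type Rating = 'good' | 'needs-improvement' | 'poor';

function rate(metric: keyof typeof THRESHOLDS, value: number): Rating {
  const t = THRESHOLDS[metric];
  if (value <= t.good) return 'good';
  if (value <= t.needsImprovement) return 'needs-improvement';
  return 'poor';
}
```

This mirrors how the `web-vitals` library buckets each metric's rating, which makes it easy to alert when real-user values leave the "Good" band.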

The Optimization Workflow

1. MEASURE  → Establish baseline with real data
2. IDENTIFY → Find the actual bottleneck (not assumed)
3. FIX      → Address the specific bottleneck
4. VERIFY   → Measure again, confirm improvement
5. GUARD    → Add monitoring or tests to prevent regression

Step 1: Measure

Frontend:

// Lighthouse in Chrome DevTools (or CI)
// Chrome DevTools → Performance tab → Record
// Chrome DevTools MCP → Performance trace

// Web Vitals library in code
import { onLCP, onINP, onCLS } from 'web-vitals';

onLCP(console.log);
onINP(console.log);
onCLS(console.log);

Backend:

// Response time logging
// Application Performance Monitoring (APM)
// Database query logging with timing

// Simple timing
console.time('db-query');
const result = await db.query(...);
console.timeEnd('db-query');
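
The `console.time` pattern can be generalized into a small wrapper so every timed call is labeled consistently. A minimal sketch — the `timed` name is mine, not a standard API:

```typescript
// Minimal sketch of a reusable timing wrapper: logs how long any async
// operation takes, whether it resolves or rejects. The name is illustrative.
async function timed<T>(label: string, fn: () => Promise<T>): Promise<T> {
  const start = Date.now();
  try {
    return await fn();
  } finally {
    console.log(`${label}: ${Date.now() - start}ms`);
  }
}

// Usage sketch: const result = await timed('db-query', () => db.query(sql));
```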

Step 2: Identify the Bottleneck

Common bottlenecks by category:

Frontend:

| Symptom | Likely Cause | Investigation |
|---|---|---|
| Slow LCP | Large images, render-blocking resources, slow server | Check network waterfall, image sizes |
| High CLS | Images without dimensions, late-loading content, font shifts | Check layout shift attribution |
| Poor INP | Heavy JavaScript on main thread, large DOM updates | Check long tasks in Performance trace |
| Slow initial load | Large bundle, many network requests | Check bundle size, code splitting |

Backend:

| Symptom | Likely Cause | Investigation |
|---|---|---|
| Slow API responses | N+1 queries, missing indexes, unoptimized queries | Check database query log |
| Memory growth | Leaked references, unbounded caches, large payloads | Heap snapshot analysis |
| CPU spikes | Synchronous heavy computation, regex backtracking | CPU profiling |
| High latency | Missing caching, redundant computation, network hops | Trace requests through the stack |
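
One cheap way to surface an N+1 pattern before reaching for a full APM is to count queries per request: a list endpoint that issues roughly one query per row is a suspect. A hedged sketch — every name here is illustrative, not a real ORM or APM API:

```typescript
// Hedged sketch: wrap a query function and count how many times it runs
// while handling one request. A count that scales with result size
// (e.g. 51 queries for a 50-row list) points at an N+1 pattern.
function countCalls<A extends unknown[], R>(fn: (...args: A) => R) {
  let calls = 0;
  const wrapped = (...args: A): R => {
    calls += 1;
    return fn(...args);
  };
  return { wrapped, calls: () => calls };
}
```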

Step 3: Fix Common Anti-Patterns

N+1 Queries (Backend)

// BAD: N+1 — one query per task for the owner
const tasks = await db.tasks.findMany();
for (const task of tasks) {
  task.owner = await db.users.findUnique({ where: { id: task.ownerId } });
}

// GOOD: Single query with join/include
const tasks = await db.tasks.findMany({
  include: { owner: true },
});
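
When the ORM's `include`/join isn't available, the same fix can be done by hand: fetch all owners in one batched query, then join in memory. A framework-free sketch — the types and names are illustrative:

```typescript
// Framework-free version of the same fix: one batched owner lookup
// (e.g. WHERE id IN (...)) instead of one query per task, then an
// in-memory join. Types and names are illustrative.
interface User { id: number; name: string; }
interface Task { id: number; ownerId: number; owner?: User; }

function attachOwners(tasks: Task[], owners: User[]): Task[] {
  const byId = new Map(owners.map(u => [u.id, u])); // O(1) lookups
  return tasks.map(t => ({ ...t, owner: byId.get(t.ownerId) }));
}
```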

Unbounded Data Fetching

// BAD: Fetching all records
const allTasks = await db.tasks.findMany();

// GOOD: Paginated with limits
const tasks = await db.tasks.findMany({
  take: 20,
  skip: (page - 1) * 20,
  orderBy: { createdAt: 'desc' },
});
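
Note that offset pagination (`skip`) costs O(offset) in the database, so deep pages get slower. A cursor ("give me the 20 items after id X") stays cheap. A framework-free sketch over an already-sorted list — in practice this condition runs in SQL, and the names here are illustrative:

```typescript
// Hedged sketch of cursor pagination over an already-sorted list.
// A real implementation pushes this into the query itself
// (e.g. WHERE created_at < :cursor ORDER BY created_at DESC LIMIT 20).
interface Row { id: number; }

function pageAfter<T extends Row>(rows: T[], cursorId: number | null, take: number): T[] {
  // An unknown or null cursor falls back to the start of the list.
  const start = cursorId === null
    ? 0
    : rows.findIndex(r => r.id === cursorId) + 1;
  return rows.slice(start, start + take);
}
```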

Missing Image Optimization (Frontend)

<!-- BAD: No dimensions, no lazy loading, no responsive sizes -->
<img src="/hero.jpg" />

<!-- GOOD: Responsive, lazy-loaded, properly sized -->
<img
  src="/hero.jpg"
  srcset="/hero-400.webp 400w, /hero-800.webp 800w, /hero-1200.webp 1200w"
  sizes="(max-width: 768px) 100vw, 50vw"
  width="1200"
  height="600"
  loading="lazy"
  alt="Hero image description"
/>

Unnecessary Re-renders (React)

// BAD: Creates new object on every render, causing children to re-render
function TaskList() {
  return <TaskFilters options={{ sortBy: 'date', order: 'desc' }} />;
}

// GOOD: Stable reference
const DEFAULT_OPTIONS = { sortBy: 'date', order: 'desc' } as const;
function TaskList() {
  return <TaskFilters options={DEFAULT_OPTIONS} />;
}

// Use React.memo for expensive components
const TaskItem = React.memo(function TaskItem({ task }: Props) {
  return <div>{/* expensive render */}</div>;
});

// Use useMemo for expensive computations
function TaskStats({ tasks }: Props) {
  const stats = useMemo(() => calculateStats(tasks), [tasks]);
  return <div>{stats.completed} / {stats.total}</div>;
}

Large Bundle Size

// RISKY: a root import can pull the whole library into the bundle
// if your bundler cannot tree-shake it
import { format } from 'date-fns';

// GOOD: per-function import ships only that function (if the library supports it)
import { format } from 'date-fns/format';

// GOOD: Dynamic import for heavy, rarely-used features
const ChartLibrary = lazy(() => import('./ChartLibrary'));

Missing Caching (Backend)

// Cache frequently-read, rarely-changed data
const CACHE_TTL = 5 * 60 * 1000; // 5 minutes
let cachedConfig: AppConfig | null = null;
let cacheExpiry = 0;

async function getAppConfig(): Promise<AppConfig> {
  if (cachedConfig && Date.now() < cacheExpiry) {
    return cachedConfig;
  }
  cachedConfig = await db.config.findFirst();
  cacheExpiry = Date.now() + CACHE_TTL;
  return cachedConfig;
}

// HTTP caching headers for static assets
app.use('/static', express.static('public', {
  maxAge: '1y',           // Cache for 1 year
  immutable: true,        // Never revalidate (use content hashing in filenames)
}));

// Cache-Control for API responses
res.set('Cache-Control', 'public, max-age=300'); // 5 minutes
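
The in-memory TTL pattern above can be factored into a small helper so each cached value doesn't need its own module-level state. A sketch — the `cachedLoader` name is mine:

```typescript
// Generalized sketch of the in-memory TTL pattern above: memoize any
// zero-argument async loader for `ttlMs` milliseconds. Name is illustrative.
function cachedLoader<T>(load: () => Promise<T>, ttlMs: number): () => Promise<T> {
  let value: T | undefined;
  let expiry = 0;
  return async () => {
    if (value !== undefined && Date.now() < expiry) return value;
    value = await load();
    expiry = Date.now() + ttlMs;
    return value;
  };
}

// Usage sketch:
// const getAppConfig = cachedLoader(() => db.config.findFirst(), 5 * 60 * 1000);
```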

Performance Budget

Set budgets and enforce them:

JavaScript bundle: < 200KB gzipped (initial load)
CSS: < 50KB gzipped
Images: < 200KB per image (above the fold)
Fonts: < 100KB total
API response time: < 200ms (p95)
Time to Interactive: < 3.5s on 4G
Lighthouse Performance score: ≥ 90

Enforce in CI:

# Bundle size check
npx bundlesize --config bundlesize.config.json

# Lighthouse CI
npx lhci autorun
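
If neither tool fits your pipeline, a hand-rolled gate can enforce the same budget. A minimal sketch using Node's built-in zlib — the 200KB figure comes from the budget list above, and the output path is an assumption about your build:

```typescript
// Minimal hand-rolled budget gate: measure gzipped size (the unit the
// budget is stated in) and fail when it exceeds the limit.
import { gzipSync } from 'node:zlib';

function gzippedSize(source: string | Buffer): number {
  return gzipSync(source).length;
}

function withinBudget(source: string | Buffer, budgetBytes: number): boolean {
  return gzippedSize(source) <= budgetBytes;
}

// Usage sketch (path is an assumption about your build output):
// import { readFileSync } from 'node:fs';
// if (!withinBudget(readFileSync('dist/main.js'), 200 * 1024)) process.exit(1);
```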

Common Rationalizations

| Rationalization | Reality |
|---|---|
| "We'll optimize later" | Performance debt compounds. Fix obvious anti-patterns now; defer micro-optimizations. |
| "It's fast on my machine" | Your machine isn't the user's. Profile on representative hardware and networks. |
| "This optimization is obvious" | If you didn't measure, you don't know. Profile first. |
| "Users won't notice 100ms" | Research shows 100ms delays impact conversion rates. Users notice more than you think. |
| "The framework handles performance" | Frameworks prevent some issues but can't fix N+1 queries or oversized bundles. |

Red Flags

  • Optimization without profiling data to justify it
  • N+1 query patterns in data fetching
  • List endpoints without pagination
  • Images without dimensions, lazy loading, or responsive sizes
  • Bundle size growing without review
  • No performance monitoring in production
  • React.memo and useMemo everywhere (overusing is as bad as underusing)

Verification

After any performance-related change:

  • Before and after measurements exist (specific numbers)
  • The specific bottleneck is identified and addressed
  • Core Web Vitals are within "Good" thresholds
  • Bundle size hasn't increased significantly
  • No N+1 queries in new data fetching code
  • Performance budget passes in CI (if configured)
  • Existing tests still pass (optimization didn't break behavior)

Source

https://github.com/addyosmani/agent-skills/blob/main/skills/performance-optimization/SKILL.md
