perf-lighthouse
npx machina-cli add skill tech-leads-club/agent-skills/perf-lighthouse --openclaw
Lighthouse Audits
CLI Quick Start
# Install
npm install -g lighthouse
# Basic audit
lighthouse https://example.com
# Mobile performance only (faster)
lighthouse https://example.com --preset=perf --form-factor=mobile
# Output JSON for parsing
lighthouse https://example.com --output=json --output-path=./report.json
# Output HTML report
lighthouse https://example.com --output=html --output-path=./report.html
Common Flags
--preset=perf # Performance only (skip accessibility, SEO, etc.)
--form-factor=mobile # Mobile device emulation (default)
--form-factor=desktop # Desktop
--throttling-method=devtools # Apply real DevTools throttling instead of the default simulated throttling
--only-categories=performance,accessibility # Specific categories
--chrome-flags="--headless" # Headless Chrome
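These flags compose; for example, a headless, desktop, performance-only audit written to JSON (the URL and output path are placeholders):
lighthouse https://example.com --preset=desktop --only-categories=performance --output=json --output-path=./desktop-perf.json --chrome-flags="--headless"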
Performance Budgets
Create budget.json (resource sizes in KB, timings in ms):
[
  {
    "resourceSizes": [
      { "resourceType": "script", "budget": 200 },
      { "resourceType": "image", "budget": 300 },
      { "resourceType": "stylesheet", "budget": 50 },
      { "resourceType": "total", "budget": 500 }
    ],
    "resourceCounts": [{ "resourceType": "third-party", "budget": 5 }],
    "timings": [
      { "metric": "interactive", "budget": 3000 },
      { "metric": "first-contentful-paint", "budget": 1500 },
      { "metric": "largest-contentful-paint", "budget": 2500 }
    ]
  }
]
Run with budget:
lighthouse https://example.com --budget-path=./budget.json
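When a budget is attached, the results land in the report as the performance-budget and timing-budget audits. A minimal sketch for surfacing overages from a saved JSON report (assumes report.json from the earlier command):
import fs from 'fs'
const report = JSON.parse(fs.readFileSync('./report.json', 'utf8'))
// Each item describes one resource type or timing and how far it is over budget
for (const id of ['performance-budget', 'timing-budget']) {
  const items = report.audits[id]?.details?.items ?? []
  items.forEach((item) => console.log(id, item))
}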
Node API
import lighthouse from 'lighthouse'
import * as chromeLauncher from 'chrome-launcher'
async function runAudit(url) {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] })
  try {
    const result = await lighthouse(url, {
      port: chrome.port,
      onlyCategories: ['performance'],
      formFactor: 'mobile',
      throttling: {
        cpuSlowdownMultiplier: 4,
      },
    })
    const { performance } = result.lhr.categories
    const { 'largest-contentful-paint': lcp } = result.lhr.audits
    return {
      score: Math.round(performance.score * 100),
      lcp: lcp.numericValue,
    }
  } finally {
    // Always release Chrome, even if the audit throws
    await chrome.kill()
  }
}
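Example usage (ESM top-level await; the URL is a placeholder):
const { score, lcp } = await runAudit('https://example.com')
console.log(`Performance: ${score}/100, LCP: ${Math.round(lcp)}ms`)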
GitHub Actions
# .github/workflows/lighthouse.yml
name: Lighthouse
on:
  pull_request:
  push:
    branches: [main]
jobs:
  lighthouse:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build site
        run: npm ci && npm run build
      - name: Start server
        run: npm run start & npx wait-on http://localhost:3000
      - name: Run Lighthouse
        uses: treosh/lighthouse-ci-action@v11
        with:
          urls: |
            http://localhost:3000
            http://localhost:3000/about
          budgetPath: ./budget.json
          uploadArtifacts: true
          temporaryPublicStorage: true
        env:
          LHCI_GITHUB_APP_TOKEN: ${{ secrets.LHCI_GITHUB_APP_TOKEN }}
Lighthouse CI (LHCI)
For full CI integration with historical tracking:
# Install
npm install -g @lhci/cli
# Optional: register a project on a self-hosted LHCI server
lhci wizard
Create lighthouserc.js in the project root:
module.exports = {
  ci: {
    collect: {
      url: ['http://localhost:3000/', 'http://localhost:3000/about'],
      startServerCommand: 'npm run start',
      numberOfRuns: 3,
    },
    assert: {
      assertions: {
        'categories:performance': ['error', { minScore: 0.9 }],
        'categories:accessibility': ['warn', { minScore: 0.9 }],
        'first-contentful-paint': ['error', { maxNumericValue: 1500 }],
        'largest-contentful-paint': ['error', { maxNumericValue: 2500 }],
        'cumulative-layout-shift': ['error', { maxNumericValue: 0.1 }],
      },
    },
    upload: {
      target: 'temporary-public-storage', // or 'lhci' for self-hosted
    },
  },
}
Run:
lhci autorun
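Any config key can also be overridden from the command line with dot notation, e.g.:
lhci autorun --collect.numberOfRuns=5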
Parse JSON Report
import fs from 'fs'
const report = JSON.parse(fs.readFileSync('./report.json', 'utf8'))
// Overall scores (0-1, multiply by 100 for percentage)
const scores = {
  performance: report.categories.performance.score,
  accessibility: report.categories.accessibility.score,
  seo: report.categories.seo.score,
}
// Core Web Vitals
const vitals = {
  lcp: report.audits['largest-contentful-paint'].numericValue,
  cls: report.audits['cumulative-layout-shift'].numericValue,
  fcp: report.audits['first-contentful-paint'].numericValue,
  tbt: report.audits['total-blocking-time'].numericValue,
}
// Failed audits
const failed = Object.values(report.audits)
  .filter((a) => a.score !== null && a.score < 0.9)
  .map((a) => ({ id: a.id, score: a.score, title: a.title }))
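A quick way to eyeball the results in a terminal:
console.table(failed)
console.log({ scores, vitals })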
Compare Builds
# Save baseline
lighthouse https://prod.example.com --output=json --output-path=baseline.json
# Run on PR
lighthouse https://preview.example.com --output=json --output-path=pr.json
# Compare (custom script)
node compare-reports.js baseline.json pr.json
Simple comparison script:
import fs from 'fs'

const baseline = JSON.parse(fs.readFileSync(process.argv[2], 'utf8'))
const pr = JSON.parse(fs.readFileSync(process.argv[3], 'utf8'))
const metrics = ['largest-contentful-paint', 'cumulative-layout-shift', 'total-blocking-time']
metrics.forEach((metric) => {
  const base = baseline.audits[metric].numericValue
  const current = pr.audits[metric].numericValue
  const diff = (((current - base) / base) * 100).toFixed(1)
  const emoji = current <= base ? '✅' : '❌'
  console.log(`${emoji} ${metric}: ${diff}% (${base.toFixed(0)} → ${current.toFixed(0)})`)
})
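To gate CI on the comparison, a sketch appended to the same script; the 5% tolerance is an arbitrary example:
const TOLERANCE = 5 // percent
const regressed = metrics.some((metric) => {
  const base = baseline.audits[metric].numericValue
  const current = pr.audits[metric].numericValue
  return ((current - base) / base) * 100 > TOLERANCE
})
if (regressed) process.exit(1) // non-zero exit fails the pipeline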
Troubleshooting
| Issue | Solution |
|---|---|
| Inconsistent scores | Run 3-5 times and use the median (numberOfRuns in LHCI; the Lighthouse CLI has no multi-run flag) |
| Chrome not found | Set CHROME_PATH env var |
| Timeouts | Increase with --max-wait-for-load=60000 |
| Auth required | Use --extra-headers or puppeteer script |
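For the authenticated case, --extra-headers accepts inline JSON (or a path to a JSON file); the cookie value is a placeholder:
lighthouse https://example.com --extra-headers='{"Cookie":"session=PLACEHOLDER"}'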
Source
https://github.com/tech-leads-club/agent-skills/blob/main/packages/skills-catalog/skills/(performance)/perf-lighthouse/SKILL.md
Overview
perf-lighthouse lets you run Lighthouse audits locally via CLI or Node API, then parse reports and apply performance budgets. It helps you measure site performance, understand Lighthouse scores, and automate audits in CI. Use it to define budgets and trigger checks during development and deployment.
How This Skill Works
Run audits with the Lighthouse CLI, or use the Node API to launch Chrome, collect results, and extract metrics such as the performance score and LCP. Runs can be tailored with presets, form factors, and category filters, and budgets are enforced through a budget.json file. Reports can be emitted as JSON or HTML for parsing, dashboards, and CI pipelines.
When to Use It
- Measure site performance locally and interpret Lighthouse scores
- Define and enforce resource and timing budgets via budget.json
- Integrate Lighthouse audits into CI workflows (GitHub Actions, LHCI)
- Audit with presets and filters (mobile/desktop, performance-only, etc.)
- Export machine-readable results (JSON) for dashboards or automation
Quick Start
- Step 1: Install Lighthouse CLI (npm install -g lighthouse)
- Step 2: Run a basic audit (lighthouse https://example.com) and view JSON/HTML outputs
- Step 3: Create a budget.json and run with budget: lighthouse https://example.com --budget-path=./budget.json
Best Practices
- Start from a baseline budget.json and iterate budgets over time
- Use --preset=perf and --form-factor=mobile to reflect real-user conditions
- Output JSON for automation and artifact storage in CI
- Combine with --only-categories to focus on performance metrics
- Test audits across dev and production environments to detect regressions
Example Use Cases
- Create a budget.json with resourceSizes, resourceCounts, and timings, then run: lighthouse https://example.com --budget-path=./budget.json
- Use the Node API to extract score and largest-contentful-paint (LCP) from result.lhr and expose a concise report
- In GitHub Actions, run audits with treosh/lighthouse-ci-action and supply budgetPath for automated checks
- Run lhci autorun with a lighthouserc.js config to enforce assertions and enable historical tracking
- Audit with mobile form factor and performance category to align with real-user mobile experiences