chrome-devtools
Chrome DevTools Agent Skill
Browser automation via executable Puppeteer scripts. All scripts output JSON for easy parsing.
Quick Start
CRITICAL: Always check pwd before running scripts.
Installation
Step 1: Install System Dependencies (Linux/WSL only)
On Linux/WSL, Chrome requires system libraries. Install them first:
pwd # Should show current working directory
cd .claude/skills/chrome-devtools/scripts
./install-deps.sh # Auto-detects OS and installs required libs
Supports: Ubuntu, Debian, Fedora, RHEL, CentOS, Arch, Manjaro
macOS/Windows: Skip this step (dependencies bundled with Chrome)
Step 2: Install Node Dependencies
npm install # Installs puppeteer, debug, yargs
Step 3: Install ImageMagick (Optional, Recommended)
ImageMagick enables automatic screenshot compression to keep files under 5MB:
macOS:
brew install imagemagick
Ubuntu/Debian/WSL:
sudo apt-get install imagemagick
Verify:
magick -version # or: convert -version
Without ImageMagick, screenshots >5MB will not be compressed (may fail to load in Gemini/Claude).
Test
node navigate.js --url https://example.com
# Output: {"success": true, "url": "https://example.com", "title": "Example Domain"}
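Because every script prints JSON, a wrapper can gate on the success flag before continuing. A minimal sketch, using the sample output above (in real use you would capture it with `output=$(node navigate.js --url https://example.com)`):

```shell
# Sketch: gate on the "success" flag in a script's JSON output.
# The sample below is the documented navigate.js output; in practice,
# capture it with: output=$(node navigate.js --url https://example.com)
output='{"success": true, "url": "https://example.com", "title": "Example Domain"}'

if printf '%s' "$output" | grep -q '"success": *true'; then
  echo "navigation ok"
else
  echo "navigation failed: $output" >&2
  exit 1
fi
```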
Available Scripts
All scripts are in .claude/skills/chrome-devtools/scripts/
CRITICAL: Always check pwd before running scripts.
Script Usage
See ./scripts/README.md for detailed usage of each script.
Core Automation
- navigate.js - Navigate to URLs
- screenshot.js - Capture screenshots (full page or element)
- click.js - Click elements
- fill.js - Fill form fields
- evaluate.js - Execute JavaScript in page context
Analysis & Monitoring
- snapshot.js - Extract interactive elements with metadata
- console.js - Monitor console messages/errors
- network.js - Track HTTP requests/responses
- performance.js - Measure Core Web Vitals + record traces
Usage Patterns
Single Command
pwd # Should show current working directory
cd .claude/skills/chrome-devtools/scripts
node screenshot.js --url https://example.com --output ./docs/screenshots/page.png
Important: Always save screenshots to the ./docs/screenshots directory.
Automatic Image Compression
Screenshots are automatically compressed if they exceed 5MB to ensure compatibility with Gemini API and Claude Code (which have 5MB limits). This uses ImageMagick internally:
# Default: auto-compress if >5MB
node screenshot.js --url https://example.com --output page.png
# Custom size threshold (e.g., 3MB)
node screenshot.js --url https://example.com --output page.png --max-size 3
# Disable compression
node screenshot.js --url https://example.com --output page.png --no-compress
Compression behavior:
- PNG: Resizes to 90% + quality 85 (or 75% + quality 70 if still too large)
- JPEG: Quality 80 + progressive encoding (or quality 60 if still too large)
- Other formats: Converted to JPEG with compression
- Requires ImageMagick (see Installation, Step 3)
Output includes compression info:
{
"success": true,
"output": "/path/to/page.png",
"compressed": true,
"originalSize": 8388608,
"size": 3145728,
"compressionRatio": "62.50%",
"url": "https://example.com"
}
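The compressionRatio field reports the percentage of bytes saved. Using the example sizes above, the arithmetic can be checked in shell (integer approximation of the reported 62.50%):

```shell
# Sketch: recompute compressionRatio from the originalSize/size fields above.
originalSize=8388608   # bytes before compression (8MB)
size=3145728           # bytes after compression (3MB)
saved=$(( (originalSize - size) * 100 / originalSize ))
echo "saved ${saved}%"   # integer form of the reported 62.50%
```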
Chain Commands (reuse browser)
# Keep browser open with --close false
node navigate.js --url https://example.com/login --close false
node fill.js --selector "#email" --value "user@example.com" --close false
node fill.js --selector "#password" --value "secret" --close false
node click.js --selector "button[type=submit]"
Parse JSON Output
# Extract specific fields with jq
node performance.js --url https://example.com | jq '.vitals.LCP'
# Save to file
node network.js --url https://example.com --output /tmp/requests.json
Execution Protocol
Working Directory Verification
BEFORE executing any script:
- Check current working directory with pwd
- Verify you are in the .claude/skills/chrome-devtools/scripts/ directory
- If in the wrong directory, cd to the correct location
- Use absolute paths for all output files
Example:
pwd # Should show: .../chrome-devtools/scripts
# If wrong:
cd .claude/skills/chrome-devtools/scripts
Output Validation
AFTER screenshot/capture operations:
- Verify the file was created with ls -lh <output-path>
- Read the screenshot using the Read tool to confirm content
- Check the JSON output for "success": true
- Report file size and compression status
Example:
node screenshot.js --url https://example.com --output ./docs/screenshots/page.png
ls -lh ./docs/screenshots/page.png # Verify file exists
# Then use Read tool to visually inspect
- Return the working directory to the project root when finished.
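The validation steps above can be sketched as a small helper that checks the output file exists and stays under the 5MB limit (the function name and structure are illustrative, not part of the skill's scripts):

```shell
# Sketch: validate a screenshot after capture. Hypothetical helper, not part
# of the skill's scripts.
check_screenshot() {
  f="$1"
  [ -f "$f" ] || { echo "missing: $f" >&2; return 1; }
  size=$(wc -c < "$f" | tr -d ' ')
  max=$((5 * 1024 * 1024))            # 5MB limit for Gemini/Claude
  if [ "$size" -le "$max" ]; then
    echo "ok: $f ($size bytes)"
  else
    echo "too large: $f ($size bytes)" >&2
    return 1
  fi
}

tmp=$(mktemp)
printf 'fake-png-bytes' > "$tmp"   # stand-in for a real screenshot
check_screenshot "$tmp"
```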
Error Recovery
If script fails:
- Check error message for selector issues
- Use snapshot.js to discover correct selectors
- Try XPath selector if CSS selector fails
- Verify element is visible and interactive
Example:
# CSS selector fails
node click.js --url https://example.com --selector ".btn-submit"
# Error: waiting for selector ".btn-submit" failed
# Discover correct selector
node snapshot.js --url https://example.com | jq '.elements[] | select(.tagName=="BUTTON")'
# Try XPath
node click.js --url https://example.com --selector "//button[contains(text(),'Submit')]"
Common Mistakes
❌ Wrong working directory → output files go to the wrong location
❌ Skipping output validation → silent failures
❌ Using complex CSS selectors without testing → selector errors
❌ Not checking element visibility → timeout errors
✅ Always verify pwd before running scripts
✅ Always validate output after screenshots
✅ Use snapshot.js to discover selectors
✅ Test selectors with simple commands first
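The working-directory rule above can be enforced with a small guard; this is a hypothetical helper, with the path suffix taken from the documented scripts location:

```shell
# Sketch: guard against running from the wrong directory. Hypothetical helper.
check_dir() {
  case "$1" in
    */chrome-devtools/scripts) echo "ready" ;;
    *) echo "wrong directory: $1" >&2; return 1 ;;
  esac
}

# In real use, pass "$(pwd)" instead of a literal path.
check_dir "$HOME/.claude/skills/chrome-devtools/scripts"   # prints: ready
```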
Common Workflows
Web Scraping
node evaluate.js --url https://example.com --script "
Array.from(document.querySelectorAll('.item')).map(el => ({
title: el.querySelector('h2')?.textContent,
link: el.querySelector('a')?.href
}))
" | jq '.result'
Performance Testing
PERF=$(node performance.js --url https://example.com)
LCP=$(echo $PERF | jq '.vitals.LCP')
if (( $(echo "$LCP < 2500" | bc -l) )); then
echo "✓ LCP passed: ${LCP}ms"
else
echo "✗ LCP failed: ${LCP}ms"
fi
Form Automation
node fill.js --url https://example.com --selector "#search" --value "query" --close false
node click.js --selector "button[type=submit]"
Error Monitoring
node console.js --url https://example.com --types error,warn --duration 5000 | jq '.messageCount'
Script Options
All scripts support:
- --headless false - Show browser window
- --close false - Keep browser open for chaining
- --timeout 30000 - Set timeout (milliseconds)
- --wait-until networkidle2 - Wait strategy
See ./scripts/README.md for complete options.
Output Format
All scripts output JSON to stdout:
{
"success": true,
"url": "https://example.com",
... // script-specific data
}
Errors go to stderr:
{
"success": false,
"error": "Error message"
}
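Since results go to stdout and errors to stderr, the two streams can be captured separately. A sketch where a printf-based function simulates a script invocation:

```shell
# Sketch: capture stdout and stderr separately. run_script simulates a
# skill script that writes its result to stdout and an error to stderr.
run_script() {
  printf '{"success": true, "url": "https://example.com"}\n'
  printf '{"success": false, "error": "simulated error"}\n' >&2
}

result=$(run_script 2>/dev/null)       # result JSON only
errors=$(run_script 2>&1 >/dev/null)   # error JSON only
echo "result: $result"
echo "errors: $errors"
```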
Finding Elements
Use snapshot.js to discover selectors:
node snapshot.js --url https://example.com | jq '.elements[] | {tagName, text, selector}'
Troubleshooting
Common Errors
"Cannot find package 'puppeteer'"
- Run npm install in the scripts directory
"error while loading shared libraries: libnss3.so" (Linux/WSL)
- Missing system dependencies
- Fix: Run ./install-deps.sh in the scripts directory
- Manual install:
sudo apt-get install -y libnss3 libnspr4 libasound2t64 libatk1.0-0 libatk-bridge2.0-0 libcups2 libdrm2 libxkbcommon0 libxcomposite1 libxdamage1 libxfixes3 libxrandr2 libgbm1
"Failed to launch the browser process"
- Check system dependencies installed (Linux/WSL)
- Verify Chrome downloaded: ls ~/.cache/puppeteer
- Try npm rebuild, then npm install
Chrome not found
- Puppeteer auto-downloads Chrome during npm install
- If that failed, manually trigger: npx puppeteer browsers install chrome
Script Issues
Element not found
- Get a snapshot first to find the correct selector: node snapshot.js --url <url>
Script hangs
- Increase timeout: --timeout 60000
- Change wait strategy: --wait-until load or --wait-until domcontentloaded
Blank screenshot
- Wait for page load: --wait-until networkidle2
- Increase timeout: --timeout 30000
Permission denied on scripts
- Make executable:
chmod +x *.sh
Screenshot too large (>5MB)
- Install ImageMagick for automatic compression
- Manually set a lower threshold: --max-size 3
- Use JPEG instead of PNG: --format jpeg --quality 80
- Capture a specific element instead of the full page: --selector .main-content
Compression not working
- Verify ImageMagick installed: magick -version or convert -version
- Check the output JSON for "compressed": true
- For very large pages, use --selector to capture only the needed area
Reference Documentation
Detailed guides available in ./references/:
- CDP Domains Reference - 47 Chrome DevTools Protocol domains
- Puppeteer Quick Reference - Complete Puppeteer API patterns
- Performance Analysis Guide - Core Web Vitals optimization
Advanced Usage
Custom Scripts
Create custom scripts using shared library:
import { getBrowser, getPage, closeBrowser, outputJSON } from './lib/browser.js';
// Your automation logic
Direct CDP Access
const client = await page.createCDPSession();
await client.send('Emulation.setCPUThrottlingRate', { rate: 4 });
See reference documentation for advanced patterns and complete API coverage.
External Resources
Source
https://github.com/Microck/ordinary-claude-skills/blob/main/skills_all/chrome-devtools/SKILL.md
Overview
Chrome DevTools Agent provides executable Puppeteer scripts to automate browsers and return JSON for easy parsing. It supports core automation (navigate, screenshot, click, fill, evaluate) as well as analysis and monitoring (network, performance, console, snapshot), making it ideal for web scraping, form automation, debugging, and performance checks.
How This Skill Works
Scripts live under .claude/skills/chrome-devtools/scripts and run with Node (puppeteer, debug, yargs). The tool outputs JSON by design and supports single commands or chained sessions, with optional ImageMagick-based screenshot compression to keep file sizes manageable.
When to Use It
- Automating repetitive browser tasks such as filling forms, navigating pages, and submitting actions.
- Web scraping or data extraction with structured JSON output for downstream processing.
- Performance and network analysis by capturing page metrics, requests, and traces.
- Debugging client-side code by monitoring console output and evaluating JavaScript in the page context.
- Visual testing and verification using automated screenshots, with optional compression to stay under size limits.
Quick Start
- Step 1: cd .claude/skills/chrome-devtools/scripts
- Step 2: npm install
- Step 3: node navigate.js --url https://example.com
Best Practices
- Always run pwd and verify you are in the correct script directory before executing any command.
- Follow the installation order: Linux/WSL system dependencies, then Node packages, then ImageMagick if you plan to compress screenshots.
- Enable automatic screenshot compression to keep images under 5MB when working with Gemini/Claude integrations.
- Use the chain-commands pattern (navigate → fill → click) with --close false to reuse a single browser session.
- Rely on the JSON outputs for reliable parsing and easier integration with other automation tools.
Example Use Cases
- Navigate to a URL and capture a full-page screenshot: node navigate.js --url https://example.com; node screenshot.js --url https://example.com --output ./docs/screenshots/page.png
- Fill a login form and submit: node navigate.js --url https://example.com/login; node fill.js --selector '#email' --value 'user@example.com'; node fill.js --selector '#password' --value 'secret'; node click.js --selector 'button[type=submit]'
- Monitor network activity while loading a page: node network.js --url https://example.com
- Evaluate JavaScript in the page to extract data: node evaluate.js --script "document.title"
- Chain actions in a single browser session: navigate → fill → click with --close false across multiple steps