Scrape
npx machina-cli add skill @ivangdavila/scrape --openclaw

Pre-Scrape Compliance Checklist
Before writing any scraping code:
- robots.txt — Fetch {domain}/robots.txt and check whether the target path is disallowed. If it is, stop.
- Terms of Service — Check /terms, /tos, /legal. An explicit scraping prohibition means you need permission first.
- Data type — Public factual data (prices, listings) is safer. Personal data triggers GDPR/CCPA.
- Authentication — Data behind login is off-limits without authorization. Never scrape protected content.
- API available? — If the site offers an API, use it. Always. Scraping when an API exists often violates ToS.
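The robots.txt step in the checklist above can be sketched with Python's standard-library parser. This is a minimal sketch: the rules string, user-agent name, and URLs are made up for illustration.

```python
from urllib.robotparser import RobotFileParser

def is_allowed(robots_txt: str, user_agent: str, url: str) -> bool:
    """Return True if the given robots.txt text permits user_agent to fetch url."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(user_agent, url)

# Illustrative robots.txt that disallows /private/ for all agents
rules = """User-agent: *
Disallow: /private/
"""

print(is_allowed(rules, "my-scraper", "https://example.com/private/page"))  # False -> stop
print(is_allowed(rules, "my-scraper", "https://example.com/public/page"))   # True
```

In practice you would fetch {domain}/robots.txt first and feed its body to the parser; if `is_allowed` returns False for your target path, the checklist says to stop.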
Legal Boundaries
- Public data, no login — Generally legal (hiQ v. LinkedIn 2022)
- Bypassing barriers — CFAA violation risk (Van Buren v. US 2021)
- Ignoring robots.txt — Gray area, often breaches ToS (Meta v. Bright Data 2024)
- Personal data without consent — GDPR/CCPA violation
- Republishing copyrighted content — Copyright infringement
Request Discipline
- Rate limit: Minimum 2-3 seconds between requests. Faster = server strain = legal exposure.
- User-Agent: Real browser string + contact email, e.g. Mozilla/5.0 ... (contact: you@email.com)
- Respect 429: Exponential backoff. Ignoring 429s shows intent to harm.
- Session reuse: Keep connections open to reduce server load.
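The rate-limit and backoff rules above can be sketched as follows. This is an illustrative helper, not part of the skill itself; the names `RateLimiter` and `backoff_delays` are hypothetical.

```python
import time

MIN_INTERVAL = 2.0  # the skill's 2-3 second floor between requests

def backoff_delays(base: float = 2.0, retries: int = 4) -> list[float]:
    """Exponential backoff schedule for repeated 429 responses: 2s, 4s, 8s, 16s."""
    return [base * (2 ** i) for i in range(retries)]

class RateLimiter:
    """Blocks so that consecutive requests are at least min_interval apart."""

    def __init__(self, min_interval: float = MIN_INTERVAL):
        self.min_interval = min_interval
        self._last = 0.0

    def wait(self) -> None:
        now = time.monotonic()
        sleep_for = self._last + self.min_interval - now
        if sleep_for > 0:
            time.sleep(sleep_for)
        self._last = time.monotonic()
```

For session reuse, a persistent HTTP session (e.g. `requests.Session()` from the third-party requests library) keeps connections open between calls; call `wait()` before each request and, on a 429, sleep through the `backoff_delays` schedule before retrying.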
Data Handling
- Strip PII immediately — Don't collect names, emails, phones unless legally justified.
- No fingerprinting — Don't combine data to identify individuals indirectly.
- Minimize storage — Cache only what you need, delete what you don't.
- Audit trail — Log what, when, where. Evidence of good faith if challenged.
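The PII-stripping and audit-trail rules above can be sketched like this. The regexes are deliberately simple and illustrative; real PII detection needs more care, and the function names are hypothetical.

```python
import json
import re
import time

# Naive patterns for common PII; tune these for your jurisdiction and data.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def strip_pii(text: str) -> str:
    """Redact emails and phone-like strings before anything is stored."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    return PHONE_RE.sub("[PHONE]", text)

def audit_entry(url: str, fields: list[str]) -> str:
    """One JSON line recording what was collected, when, and from where."""
    return json.dumps({"url": url, "fields": fields, "ts": time.time()})
```

Appending each `audit_entry` line to a log file gives the what/when/where trail the skill calls for: evidence of good faith if the collection is ever challenged.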
For code patterns and robots.txt parser, see code.md
Overview
Scrape enables legal and responsible data collection from the web by enforcing robots.txt checks, terms of service review, and appropriate rate limiting. It distinguishes public data from personal data to avoid GDPR/CCPA triggers, and guides authentication, API use, and data-minimization practices. By combining these guards with a clear audit trail, it reduces legal risk while delivering useful data.
How This Skill Works
Before any scraping, it validates access rules by fetching robots.txt and reviewing terms, then it classifies data as public or personal and checks for login requirements or available APIs. During scraping, it adheres to rate limits (2-3 seconds between requests), uses a realistic User-Agent with contact info, and reuses sessions to minimize server load while respecting 429 responses with exponential backoff. It also processes data by stripping PII, avoiding fingerprinting, minimizing storage, and logging an audit trail for accountability.
When to Use It
- Targeting public-facing data that doesn't require login or credentials.
- When you need to verify whether the site's robots.txt or terms of service restrict scraping.
- When you must enforce rate limits and monitor for 429 responses.
- When handling data with privacy concerns (GDPR/CCPA) and minimizing PII.
- When you want to check for an official API and prefer it over scraping.
Quick Start
- Step 1: Review robots.txt, terms, and data type to ensure legality.
- Step 2: Implement rate-limited requests (2-3s) with a real User-Agent and contact email, and prefer API if available.
- Step 3: Strip PII, minimize storage, and maintain an audit trail for compliance.
Best Practices
- Always check robots.txt and terms (ToS) before scraping.
- Rate limit with a minimum 2-3 seconds between requests; implement exponential backoff on 429.
- Use a real User-Agent that includes a contact email.
- Strip PII and avoid fingerprinting; minimize data you store.
- Keep an audit trail and reuse sessions to reduce server load.
Example Use Cases
- Collecting product prices from public retailer pages without login.
- Aggregating public real estate or job listings while respecting ToS and robots.txt.
- Scraping data that is clearly public and non-identifiable to avoid GDPR/CCPA concerns.
- Using an API available on the site instead of scraping where possible.
- Maintaining an audit log to demonstrate compliance during data collection.