Scrapeless MCP Server
claude mcp add --transport stdio scrapeless-ai-scrapeless-mcp-server \
  --env SCRAPELESS_KEY="YOUR_SCRAPELESS_KEY" \
  -- npx -y scrapeless-mcp-server
How to use
Scrapeless MCP Server acts as an integration layer that lets large language models and AI agents interact with the web in real time. It connects models like ChatGPT and Claude to browser automation, web scraping, and various web services through the MCP protocol. The server exposes a set of universal tools (browser actions, search, and scrape capabilities) that can be orchestrated by your LLM prompts to navigate pages, extract data, and export results in formats such as HTML, Markdown, or screenshots. This makes it suitable for building autonomous web agents, research assistants, or coding copilots that need live web data and dynamic context.
To use it, configure either the Stdio (local) or Streamable HTTP (hosted API) transport as described in the setup steps. With Stdio, you run the MCP server locally (typically via npx scrapeless-mcp-server) and provide your Scrapeless API key. With Streamable HTTP, you host the MCP server behind an API gateway and authenticate with an API token. The server's supported MCP tools include google_search, google_trends, browser_* actions (goto, click, type, wait, screenshot, etc.), and scrape_html / scrape_markdown for pulling content from web pages. Integrations with Claude Desktop and Cursor IDE let you add Scrapeless as an MCP source directly in those apps, enabling conversational web control and data extraction within your usual workflows.
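In Streamable HTTP mode, invoking one of these tools is an ordinary MCP JSON-RPC request. As a rough sketch (the query argument name is an assumption for illustration, not confirmed by this page), a google_search call might look like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "google_search",
    "arguments": { "query": "scrapeless mcp server" }
  }
}
```

In practice your MCP client (Claude Desktop, Cursor, etc.) builds these requests for you; the raw shape is mainly useful when testing a hosted endpoint directly.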
How to install
Prerequisites:
- Node.js and npm installed on your system
- Access to the Scrapeless API with a valid SCRAPELESS_KEY
Installation steps:
- Install and run the Scrapeless MCP Server via npm:
# Install and start the Scrapeless MCP Server using npx (no global install required)
npx -y scrapeless-mcp-server
- Provide your Scrapeless API key when prompted or set it in the environment:
export SCRAPELESS_KEY=YOUR_SCRAPELESS_KEY
- Alternatively, if you prefer to configure a dedicated mcp.json, create a file like this (adjust as needed):
{
  "mcpServers": {
    "Scrapeless MCP Server": {
      "command": "npx",
      "args": ["-y", "scrapeless-mcp-server"],
      "env": {
        "SCRAPELESS_KEY": "YOUR_SCRAPELESS_KEY"
      }
    }
  }
}
- If you want to use the hosted Streamable HTTP mode, follow the hosted API setup in the README after starting the server.
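For the hosted Streamable HTTP mode, MCP clients that support remote servers are typically configured with a URL rather than a local command. A minimal sketch, assuming your client accepts a url field; the endpoint and token below are placeholders you would replace with your own values from the hosted API setup:

```json
{
  "mcpServers": {
    "Scrapeless MCP Server": {
      "url": "https://your-gateway.example.com/mcp?token=YOUR_API_TOKEN"
    }
  }
}
```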
Additional notes
Tips and common considerations:
- Keep your SCRAPELESS_KEY secure and avoid committing it to public repos.
- When using Streamable HTTP, ensure your host has a reliable HTTPS endpoint and configure proper CORS/headers as needed.
- You can customize browser session behavior via environment variables (BROWSER_PROFILE_ID, BROWSER_PROFILE_PERSIST, BROWSER_SESSION_TTL) for persistent sessions and faster reuse.
- The supported MCP tools include google_search, google_trends, and a comprehensive set of browser actions (goto, click, type, wait, screenshot, etc.) as well as scraping capabilities (scrape_html, scrape_markdown).
- For best results, combine browser actions with live sessions and the scrape tools to gather structured data and export it in your preferred format.
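The session-related environment variables from the notes above can be combined with the API key in one shell setup. A sketch with illustrative placeholder values (the variable names come from this page; the specific values are assumptions):

```shell
# Required: your Scrapeless API key
export SCRAPELESS_KEY="YOUR_SCRAPELESS_KEY"

# Optional: persistent browser session settings for faster reuse
export BROWSER_PROFILE_ID="my-saved-profile"  # reuse a specific browser profile
export BROWSER_PROFILE_PERSIST=true           # keep profile state between sessions
export BROWSER_SESSION_TTL=180                # session time-to-live

# Then start the server with these settings applied:
# npx -y scrapeless-mcp-server
```

Persistent profiles keep cookies and login state between runs, which avoids repeating authentication flows on every session.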
Related MCP Servers
iterm
A Model Context Protocol server that executes commands in the current iTerm session - useful for REPL and CLI assistance
mcp
Octopus Deploy Official MCP Server
furi
CLI & API for MCP management
editor
MCP Server for Phaser Editor
DoorDash
MCP server from JordanDalton/DoorDash-MCP-Server
mcp
MCP server for automatically creating and deploying applications in Timeweb Cloud