
crawl4ai

🚀 High-performance MCP Server for Crawl4AI - Enable AI assistants to access web scraping, crawling, and deep research via Model Context Protocol. Faster and more efficient than FireCrawl!

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio bjornmelin-crawl4ai-mcp-server \
  --env PORT="8787" \
  --env API_VERSION="v1" \
  --env MAX_CRAWL_DEPTH="3" \
  --env MAX_CRAWL_PAGES="100" \
  --env OAUTH_CLIENT_ID="" \
  --env OAUTH_CLIENT_SECRET="" \
  -- node dist/index.js

How to use

Crawl4AI MCP Server exposes a set of web crawling and data extraction tools to AI assistants through the Model Context Protocol. The server implements tools for crawling web pages, retrieving crawl data, listing crawls, performing text searches across indexed content, and extracting structured information from URLs. It is designed to run as a Cloudflare Workers-backed MCP server but can also be used locally via the provided development setup.

To use it with MCP clients (for example Claude Desktop), point the client at the server URL and authorize via OAuth or an API key, then browse or invoke the available tools: crawl, getCrawl, listCrawls, search, and extract. Each tool accepts the appropriate inputs (e.g., a starting URL for crawl, a query for search, or a URL for extract) and returns structured results the AI assistant can consume to continue its task.
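As a sketch, a Claude Desktop entry for a remotely deployed instance might look like the following. The worker URL, the `/sse` endpoint path, and the use of the mcp-remote bridge are assumptions for illustration, not values taken from this repository:

```json
{
  "mcpServers": {
    "crawl4ai": {
      "command": "npx",
      "args": ["mcp-remote", "https://your-worker.example.workers.dev/sse"]
    }
  }
}
```

Replace the URL with your deployed Worker (or local server) address, and supply OAuth or API-key credentials as your deployment requires.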

During development and testing you can run the server locally using npm run dev (with the server available at http://localhost:8787) or spin up a Docker-based environment with docker-compose. The MCP integration ensures that tools are discoverable by MCP clients and that authentication is enforced via OAuth or Bearer tokens. When integrated with Crawl4AI, the server enables high-performance web data acquisition, deep research, and structured content extraction to power AI-assisted workflows.
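For the Docker route, a minimal docker-compose service definition might look like this. The service name, build context, and env file are assumptions for illustration; the port matches the default 8787 mentioned above:

```yaml
services:
  crawl4ai-mcp:
    build: .
    ports:
      - "8787:8787"   # expose the server on http://localhost:8787
    env_file:
      - .env          # OAuth credentials and crawl limits
```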

How to install

Prerequisites

  • Node.js v18 or newer
  • npm (comes with Node.js) or pnpm/yarn if you prefer
  • Wrangler CLI for Cloudflare Workers (optional for local development with CF Workers)
  • Docker (optional for Docker-based development)

Installation steps

  1. Clone the repository

  2. Install dependencies

    • npm install
  3. Build the project (if a build step is required by the setup)

    • npm run build (if defined in package.json) or configure the TypeScript build to emit to dist/
  4. Configure Cloudflare Wrangler (optional for CF Workers deployment)

    • Copy example env and seed values if needed
    • wrangler login
    • wrangler secret put OAUTH_CLIENT_ID
    • wrangler secret put OAUTH_CLIENT_SECRET
  5. Run locally (development)

    • npm run dev (server available at http://localhost:8787)
  6. Optional Docker-based local development

    • docker-compose up (after copying and populating the .env file)

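If you deploy to Cloudflare Workers and want persistent crawl storage, the CRAW_DATA-style KV binding mentioned in the tips below needs an entry in wrangler.toml. A sketch, with a placeholder namespace id you must replace with your own:

```toml
# Binding name matches the CRAWL_DATA namespace referenced in the notes;
# the id below is a placeholder, not a real namespace id.
[[kv_namespaces]]
binding = "CRAWL_DATA"
id = "<your-kv-namespace-id>"
```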
Notes

  • Ensure environment variables for OAuth or API key authentication are set before starting the server.
  • If you customize MAX_CRAWL_DEPTH, MAX_CRAWL_PAGES, or API_VERSION, update the corresponding environment variables accordingly.
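As a sketch, a local .env (or .dev.vars for Wrangler) could set the same variables shown in the install command above; the OAuth values are placeholders:

```
PORT=8787
API_VERSION=v1
MAX_CRAWL_DEPTH=3
MAX_CRAWL_PAGES=100
OAUTH_CLIENT_ID=your-client-id
OAUTH_CLIENT_SECRET=your-client-secret
```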

Additional notes

Tips and common issues:

  • If the MCP tools fail to load in your MCP client, verify that the Cloudflare Worker URL (or the local server URL) is reachable and that authentication tokens are valid.
  • When running in Docker, make sure to copy and set up the .env file correctly so that API keys and OAuth credentials are present.
  • If using the RDF-like output or extraction features, check that the CSS selectors or extraction rules are properly defined in the tool schemas.
  • The server exposes several tools (crawl, getCrawl, listCrawls, search, extract); ensure your MCP client is configured to discover and call these tools by their exact names.
  • Configure the CRAWL_DATA KV namespace in Cloudflare if you intend to store crawl results persistently.
  • Review docs under docs/ for migration plans, architecture, and implementation details to align usage with the latest capabilities.
