mcp-jinaai-reader
Model Context Protocol (MCP) tool for parsing websites using the Jina.ai Reader
claude mcp add --transport stdio spences10-mcp-jinaai-reader \
  --env JINAAI_API_KEY="your-jinaai-api-key" \
  -- npx -y mcp-jinaai-reader
How to use
This MCP server integrates Jina.ai's Reader API to extract and convert web page content into a format optimized for large language models. The server exposes a single MCP tool named read_url, which takes a URL and returns structured, LLM-friendly text with options to control formatting, caching, and content scope. To use it, configure your MCP client to point at the jinaai-reader server and supply your Jina.ai API key. Once running, you can request content from any URL, with the response tailored for downstream NLP tasks such as summarization, indexing, or document understanding. The tool supports extracting and formatting documentation, web pages, and other content types while preserving structure and enabling optional summaries of links or images.
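Under the hood, Jina's Reader is straightforward to reason about: it fetches a page when you prefix the target URL with `https://r.jina.ai/` and authenticate with a bearer token. A minimal sketch of the request this server makes on your behalf (the server's exact internals are an assumption here; the URL-prefix scheme and Authorization header are Jina's documented interface):

```shell
# Jina Reader works by prefixing the target URL with https://r.jina.ai/
target="https://example.com/docs"
reader_url="https://r.jina.ai/${target}"
echo "$reader_url"

# With an API key exported, the equivalent raw request would be:
#   curl -H "Authorization: Bearer $JINAAI_API_KEY" "$reader_url"
```

The MCP layer simply wraps this request and returns the LLM-friendly text to the client.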
How to install
Prerequisites:
- Node.js (recommended LTS) and npm installed
- Access to a Jina.ai API key
Installation steps:
- Clone the repository and install dependencies:
  # Clone the repository (adjust path if needed)
  git clone https://github.com/spences10/mcp-jinaai-reader.git
  cd mcp-jinaai-reader
  # Install dependencies
  npm install
- Build the project (if required by the repo):
  npm run build
- Run the server in development mode:
  npm run dev
- Run in production (example):
  npm run build
  npm start
Notes:
- Ensure you have a valid Jina.ai API key and set it in the environment variable JINAAI_API_KEY when running the server or in your MCP client configuration.
- The README examples show how to configure MCP clients for different environments; adapt commands to your deployment environment as needed.
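For clients that read a JSON configuration file (for example Claude Desktop's claude_desktop_config.json), the equivalent setup looks roughly like the following. This is a sketch following the common MCP client config shape; the server name and npx invocation mirror the install command above, and the config file's location depends on your OS:

```json
{
  "mcpServers": {
    "mcp-jinaai-reader": {
      "command": "npx",
      "args": ["-y", "mcp-jinaai-reader"],
      "env": {
        "JINAAI_API_KEY": "your-jinaai-api-key"
      }
    }
  }
}
```

Defining the key under "env" ensures it is propagated to the spawned server process rather than depending on your shell environment.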
Additional notes
Tips and considerations:
- The JINAAI_API_KEY environment variable is required for authentication with the Jina.ai Reader API.
- In non-Windows environments, you may run the Node server directly or via an MCP orchestrator; the read_url tool takes a URL plus optional parameters for formatting, timeouts, and selective content extraction.
- If you encounter authentication errors, double-check that the API key is correctly exported in the environment where the MCP client launches the server.
- The server supports options like no_cache, format (json or stream), timeout, and various CSS selector-based filters to focus or exclude content.
- When integrating with Claude Desktop or WSL, mirror the config example provided in the README to ensure proper key propagation.
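Putting the options above together, a read_url call from an MCP client might carry arguments like these. Parameter names beyond url, no_cache, format, and timeout are illustrative assumptions; check the tool's published schema for the exact selector field names:

```json
{
  "name": "read_url",
  "arguments": {
    "url": "https://example.com/docs/getting-started",
    "no_cache": false,
    "format": "json",
    "timeout": 30,
    "target_selector": "article.main-content"
  }
}
```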
Related MCP Servers
firecrawl
Official Firecrawl MCP Server - Adds powerful web scraping and search to Cursor, Claude and any other LLM clients.
langchain-mcp-adapters
** THIS REPO HAS MOVED TO https://github.com/langchain-ai/langchainjs/tree/main/libs/langchain-mcp-adapters ** Adapters for integrating Model Context Protocol (MCP) tools with LangChain.js applications, supporting both stdio and SSE transports.
bitbucket
Bitbucket MCP - A Model Context Protocol (MCP) server for integrating with Bitbucket Cloud and Server APIs
mcp-arr
MCP server for *arr media management suite
mcp-batchit
MCP aggregator for batching multiple tool calls into a single request. Reduces overhead, saves tokens, and simplifies complex operations in AI agent workflows.
crawlbase
Crawlbase MCP Server connects AI agents and LLMs with real-time web data. It powers Claude, Cursor, and Windsurf integrations with battle-tested web scraping, JavaScript rendering, and anti-bot protection enabling structured, live data inside your AI workflows.