mcp-fetchplus
MCP server from unclevicky/mcp-server-fetchplus
claude mcp add --transport stdio unclevicky-mcp-server-fetchplus python -m mcp_server_fetchplus.server \
  --env PORT=8000 \
  --env LOG_LEVEL=INFO
How to use
This MCP server, FetchPlus, fetches the content of a given URL and converts it to Markdown. It preserves images during conversion and automatically segments long documents to fit within the context limits of large language models. Clients call the /mcp/tool/fetch endpoint with a JSON payload containing the target URL and receive a Markdown representation of the page, with embedded assets where supported. The server can be started locally or in a containerized environment and exposes a simple API for fetching content by URL.
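As a minimal sketch, a client call might look like the following. The /mcp/tool/fetch path and the {"url": ...} payload shape come from this page; the assumption that the endpoint returns the Markdown body directly (rather than wrapping it in JSON) is mine:

```python
import json
from urllib import request

def build_payload(url):
    # Payload shape documented on this page: {"url": "https://example.com"}
    return json.dumps({"url": url}).encode("utf-8")

def fetch_markdown(url, server="http://localhost:8000"):
    # Assumption: the response body is the Markdown text itself;
    # adjust the parsing here if the server wraps it in JSON.
    req = request.Request(
        f"{server}/mcp/tool/fetch",
        data=build_payload(url),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```

With the server running locally, `fetch_markdown("https://example.com")` would return the page as Markdown.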
How to install
Prerequisites:
- Python 3.8+ installed on your system
- Internet access to install the package from PyPI
Installation steps:
- Install the package from PyPI:
  pip install mcp-server-fetchplus
- Run the MCP server (example):
  python -m mcp_server_fetchplus.server
- By default the server runs on http://localhost:8000. If you need a different port, set the PORT environment variable before starting:
  PORT=8080 python -m mcp_server_fetchplus.server
- (Optional) If you prefer using a virtual environment:
  python -m venv venv
  source venv/bin/activate    # on Unix/macOS
  .\venv\Scripts\activate     # on Windows
  pip install mcp-server-fetchplus
Additional notes
Notes:
- The server exposes the /mcp/tool/fetch endpoint for fetching content from a URL and converting it to Markdown. Ensure your requests include a JSON payload like {"url": "https://example.com"}.
- Images referenced in the webpage may be preserved if accessible; ensure network access to the image URLs.
- Long documents are automatically split to fit within typical LLM context windows. If you encounter very long pages, you may receive multiple Markdown chunks.
- If you encounter port conflicts, adjust the PORT environment variable before starting the server.
- This is a Python-based MCP server; no npm package is involved.
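The automatic segmentation noted above can be illustrated with a minimal sketch. The server's actual chunking algorithm is not documented here, so the character budget and the paragraph-boundary strategy below are assumptions:

```python
def split_markdown(text, max_chars=4000):
    # Illustration only: greedily pack whole paragraphs into chunks of at
    # most max_chars characters, mirroring the idea of splitting a long
    # document to fit an LLM context window.
    chunks, current, size = [], [], 0
    for para in text.split("\n\n"):
        if size + len(para) > max_chars and current:
            chunks.append("\n\n".join(current))
            current, size = [], 0
        current.append(para)
        size += len(para) + 2  # +2 for the paragraph separator
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

A client that receives multiple chunks can simply rejoin them with blank lines to reconstruct the full document.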
Environment variables to consider:
- PORT: Port number to run the server on (default 8000)
- LOG_LEVEL: Logging level (e.g., INFO, DEBUG)
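For illustration, the two variables map to runtime settings roughly as in this sketch; the server's real startup code may read them differently:

```python
import os

def read_config(env=os.environ):
    # Defaults documented above: PORT=8000, LOG_LEVEL=INFO.
    return {
        "port": int(env.get("PORT", "8000")),
        "log_level": env.get("LOG_LEVEL", "INFO"),
    }
```

Because both values come from the environment, they can be set per-invocation (e.g. `PORT=8080 python -m mcp_server_fetchplus.server`) without editing any files.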
Common issues:
- Module not found errors: ensure you installed the package and are activating the correct Python environment.
- 404 or connection errors: verify the server is running and the correct URL/port are used.
Related MCP Servers
mcp-vegalite
MCP server from isaacwasserman/mcp-vegalite-server
github-chat
A Model Context Protocol (MCP) for analyzing and querying GitHub repositories using the GitHub Chat API.
nautex
MCP server for guiding Coding Agents via end-to-end requirements to implementation plan pipeline
pagerduty
PagerDuty's official local MCP (Model Context Protocol) server which provides tools to interact with your PagerDuty account directly from your MCP-enabled client.
futu-stock
MCP server for Futu NiuNiu stock
mcp-boilerplate
Boilerplate using one of the 'better' ways to build MCP servers, written using FastMCP.