mcp-fetch-python
MCP server from tatn/mcp-server-fetch-python
claude mcp add --transport stdio tatn-mcp-server-fetch-python uvx mcp-server-fetch-python \
  --env MODEL_NAME="gpt-4o" \
  --env OPENAI_API_KEY="your-openai-api-key" \
  --env PYTHONIOENCODING="utf-8"
How to use
This MCP server, mcp-server-fetch-python, provides four tools for extracting and transforming web content: fetching raw text directly from a URL, retrieving fully rendered HTML via a headless browser, converting pages to Markdown, and performing AI-powered content extraction from media files (images and videos). The media extraction tool requires an OpenAI API key. To use these tools, configure the server in your MCP client (for example, Claude Desktop) so it can be launched via uvx with the appropriate package name. Once the server is running, point one of the tools at a URL to obtain plain text, rendered HTML, Markdown, or AI-extracted media insights.
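To illustrate what an MCP client does under the hood, the messages below sketch a minimal MCP stdio session. This is an assumption-laden sketch: the tool name get-markdown and the exact protocol fields follow the standard MCP handshake and are not taken verbatim from the repository. Piping these lines to the server (last line, commented out, requires network access) would invoke the tool:

```shell
# Three JSON-RPC messages of a minimal MCP stdio session (handshake, then a tool call).
# NOTE: "get-markdown" and the field values are assumptions for illustration.
INIT='{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"manual-client","version":"0.1"}}}'
READY='{"jsonrpc":"2.0","method":"notifications/initialized"}'
CALL='{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":"get-markdown","arguments":{"url":"https://example.com"}}}'
# Print the messages so you can inspect what a client would send:
printf '%s\n' "$INIT" "$READY" "$CALL"
# To actually drive the server, pipe the same three lines into it:
# printf '%s\n' "$INIT" "$READY" "$CALL" | uvx mcp-server-fetch-python
```

In practice your MCP client performs this exchange for you; the sketch only shows the shape of the traffic.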
How to install
Prerequisites:
- Git must be installed to clone the repository.
- uv (Python package and project manager) installed and accessible in your PATH.
Installation steps:
- Clone the repository: git clone https://github.com/tatn/mcp-server-fetch-python.git
- Navigate into the project directory: cd mcp-server-fetch-python
- Install dependencies with uv: uv sync
- Build the package: uv build
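Taken together, the installation steps above amount to the following shell sequence (a sketch; assumes git and uv are on your PATH, and requires network access):

```shell
# Clone the repository and build it with uv.
git clone https://github.com/tatn/mcp-server-fetch-python.git
cd mcp-server-fetch-python
uv sync    # create the virtual environment and install dependencies
uv build   # build the distributable package
```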
Optional: configure Claude Desktop to run the server by adding the following configuration (the JSON is identical on both platforms; only the location of the config file differs):
- For Claude Desktop on macOS (~/Library/Application Support/Claude/claude_desktop_config.json): "mcpServers": { "mcp-server-fetch-python": { "command": "uvx", "args": [ "mcp-server-fetch-python" ] } }
- For Claude Desktop on Windows (%APPDATA%\Claude\claude_desktop_config.json): "mcpServers": { "mcp-server-fetch-python": { "command": "uvx", "args": [ "mcp-server-fetch-python" ] } }
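Expanded into a complete config file, the snippet above would look roughly like this (a sketch; merge it with any existing mcpServers entries rather than overwriting your file, and note the env block is optional unless you use the media tool):

```json
{
  "mcpServers": {
    "mcp-server-fetch-python": {
      "command": "uvx",
      "args": ["mcp-server-fetch-python"],
      "env": {
        "OPENAI_API_KEY": "your-openai-api-key",
        "MODEL_NAME": "gpt-4o",
        "PYTHONIOENCODING": "utf-8"
      }
    }
  }
}
```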
If you prefer running locally via uv directly, you can also start the server with:
- From the repository root, run uv sync, then uv build, then uv run mcp-server-fetch-python
Additional notes
Environment variables:
- OPENAI_API_KEY: Required for the get-markdown-from-media tool (AI-powered analysis of media). Set it in your environment or in your MCP config under env.
- PYTHONIOENCODING: Set to utf-8 if you encounter encoding issues.
- MODEL_NAME: Model name to use; defaults to gpt-4o if not specified.
Common issues:
- If get-markdown-from-media returns an error about missing API key, ensure OPENAI_API_KEY is correctly set in the environment or MCP config.
- When rendering JavaScript-heavy pages, use get-rendered-html for accurate content.
- For Claude Desktop integration, ensure the paths in the config point to the correct local installation or cloned repository path.
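The environment variables listed above can also be exported in the shell before launching the server manually (values below are placeholders; the launch line is commented out since it runs a long-lived process):

```shell
# Example environment setup for mcp-server-fetch-python (placeholder values).
export OPENAI_API_KEY="your-openai-api-key"   # needed by get-markdown-from-media
export MODEL_NAME="gpt-4o"                    # optional; gpt-4o is the default
export PYTHONIOENCODING="utf-8"               # avoids encoding issues on some systems
# uvx mcp-server-fetch-python                 # then launch the server
```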
Related MCP Servers
mcp-vegalite
MCP server from isaacwasserman/mcp-vegalite-server
github-chat
A Model Context Protocol (MCP) for analyzing and querying GitHub repositories using the GitHub Chat API.
nautex
MCP server for guiding Coding Agents via end-to-end requirements to implementation plan pipeline
pagerduty
PagerDuty's official local MCP (Model Context Protocol) server which provides tools to interact with your PagerDuty account directly from your MCP-enabled client.
futu-stock
MCP server for Futu NiuNiu stock.
mcp-boilerplate
Boilerplate using one of the 'better' ways to build MCP servers, written using FastMCP.