
gptr

MCP server for enabling LLM applications to perform deep research via the MCP protocol

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio assafelovic-gptr-mcp python /absolute/path/to/gpt-researcher/gptr-mcp/server.py \
  --env OPENAI_API_KEY="your_openai_api_key" \
  --env TAVILY_API_KEY="your_tavily_api_key"

How to use

GPT Researcher MCP Server provides a research-focused assistant for LLM workflows. It exposes tools that perform deep web research, fast searches, report generation, and retrieval of context and sources to feed into your prompts. The primary tools are:

  • deep_research: thorough investigation of a topic using trusted sources
  • quick_search: rapid web searches with concise snippets
  • write_report: generates structured reports from gathered results
  • get_research_sources: lists the sources used
  • get_research_context: fetches the full context of the current research

The server is designed to integrate with Claude Desktop or other MCP clients over an SSE or HTTP transport, enabling session-based communication and streamlined query handling. After starting the server, obtain a session ID from the SSE endpoint, then send MCP messages to the messages endpoint using that session ID to drive your research tasks.
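Once a session is established, each tool is invoked with a standard MCP `tools/call` JSON-RPC message posted to the messages endpoint. A minimal sketch of building such a request (the helper name, host, and port here are illustrative assumptions, not taken from the server's code):

```python
import json

def build_tool_call(session_id, tool_name, arguments, request_id=1):
    """Build an MCP tools/call JSON-RPC request body and the endpoint URL.

    The /messages/?session_id=... path follows the SSE transport convention
    described above; adjust the host/port to match your deployment.
    """
    url = f"http://localhost:8000/messages/?session_id={session_id}"
    message = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return url, json.dumps(message)

url, body = build_tool_call("abc123", "deep_research", {"query": "state of MCP adoption"})
print(url)   # http://localhost:8000/messages/?session_id=abc123
print(json.loads(body)["params"]["name"])  # deep_research
```

Any HTTP client can then POST `body` to `url`; the tool result streams back over the open SSE connection.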

How to install

Prerequisites:

  • Python 3.11 or higher installed
  • API keys for services you plan to use (OpenAI, Tavily, etc.)
  • Access to the gpt-researcher repository, which contains the gptr-mcp directory

Step-by-step installation:

  1. Clone the GPT Researcher repository and navigate to the project:
git clone https://github.com/assafelovic/gpt-researcher.git
cd gpt-researcher
  2. Install the gptr-mcp dependencies:
cd gptr-mcp
pip install -r requirements.txt
  3. Set up environment variables:
  • Copy the example env file to create a new .env:
cp .env.example .env
  • Edit the .env file to add your API keys and configure other settings:
OPENAI_API_KEY=your_openai_api_key
TAVILY_API_KEY=your_tavily_api_key

You can add any other environment variables needed for your GPT Researcher configuration.
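Since the server reads its configuration from .env, a quick stdlib-only sanity check that the required keys are present can catch setup mistakes early. This is a sketch; the key list and the simplified KEY=value parsing are assumptions, not the server's own loader:

```python
REQUIRED_KEYS = ["OPENAI_API_KEY", "TAVILY_API_KEY"]

def parse_env_file(text):
    """Parse simple KEY=value lines from a .env-style string, skipping comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

def missing_keys(env):
    """Return the required keys that are absent or empty."""
    return [k for k in REQUIRED_KEYS if not env.get(k)]

sample = "OPENAI_API_KEY=sk-test\nTAVILY_API_KEY=tvly-test\n"
print(missing_keys(parse_env_file(sample)))  # []
```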

Optional: to run with Claude Desktop, configure its config file as documented in the repository, pointing the server entry at the appropriate server.py path.
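A Claude Desktop mcpServers entry along these lines should work (the paths and key values below are placeholders; confirm the exact shape against the repository's documented config):

```json
{
  "mcpServers": {
    "gptr-mcp": {
      "command": "python",
      "args": ["/absolute/path/to/gpt-researcher/gptr-mcp/server.py"],
      "env": {
        "OPENAI_API_KEY": "your_openai_api_key",
        "TAVILY_API_KEY": "your_tavily_api_key"
      }
    }
  }
}
```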

  4. Run the MCP server directly or via Docker as described in the running instructions:
# Example: run directly
python server.py

Additional notes

Tips and caveats:

  • The server supports multiple transport modes (STDIO, SSE, and Streamable HTTP). Use MCP_TRANSPORT env var to force a transport if needed.
  • When running in Docker, the server binds to 0.0.0.0:8000 to allow container networking; use the provided docker commands to build and run.
  • You will typically interact with the SSE endpoint to obtain a session_id from /sse, then send MCP messages to /messages/?session_id=YOUR_SESSION_ID.
  • If you integrate with Claude Desktop, configure the mcpServers entry with command: python and the path to server.py, plus necessary API keys in env.
  • For production deployments, consider using Docker or docker-compose for easier orchestration and environment isolation.
  • Ensure your API keys and sensitive information are kept secure and not committed to version control.
  • If you encounter build or runtime issues, verify you’re using Python 3.11+ and the latest gpt-researcher requirements, since older environments may fail due to compatibility constraints.

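The session flow in the tips above starts with reading the session_id off the SSE stream, which by convention arrives in an early `endpoint` event. A small sketch of extracting it from raw SSE lines (the exact event name and payload shape are assumptions; check your server's actual output):

```python
def extract_session_id(sse_lines):
    """Scan raw SSE lines for an 'endpoint' event and pull session_id from its data URL."""
    expecting_data = False
    for line in sse_lines:
        line = line.strip()
        if line == "event: endpoint":
            expecting_data = True
        elif expecting_data and line.startswith("data:"):
            data = line[len("data:"):].strip()  # e.g. /messages/?session_id=abc123
            marker = "session_id="
            if marker in data:
                return data.split(marker, 1)[1]
            expecting_data = False
    return None

stream = [
    "event: endpoint",
    "data: /messages/?session_id=7f3a9c",
    "",
]
print(extract_session_id(stream))  # 7f3a9c
```

In practice you would feed this the lines from an open HTTP connection to /sse rather than a list.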