
deep-research

A minimalist deep research framework for any OpenAI-API-compatible LLM.

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio troyhantech-deep-research \
  --env OPENAI_API_KEY="your-openai-api-key" \
  --env OPENAI_BASE_URL="https://api.openai.com/v1/" \
  --env LANGSMITH_API_KEY="your-langsmith-api-key" \
  --env LANGSMITH_PROJECT="your-langsmith-project" \
  --env LANGSMITH_TRACING="true" \
  --env LANGSMITH_ENDPOINT="https://api.smith.langchain.com" \
  -- python main.py --env-file .env --config-file config.toml --mode mcp_stdio

How to use

Deep Research is a Python-based, MCP-enabled research automation tool built on FastAPI. It orchestrates multiple AI agents (Planner, Workers, and Reporter) to decompose complex research tasks into subtasks, execute them via MCP tools, and aggregate the results into a final report. The system exposes both MCP transports (stdio and streamable_http) and an HTTP API, so it can be integrated with MCP clients or driven by direct HTTP requests.

To use it, deploy the server and connect with the MCP client of your choice, configuring the MCP transport and the set of tools to expose to workers. The workflow iterates through planning, parallel task execution, and reporting until a final report is produced. You can also run the HTTP API mode to POST tasks, fetch web-based reports, or query results programmatically.
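As a sketch of driving the HTTP API mode from Python, assuming the /deep-research endpoint mentioned in the notes accepts a JSON body (the "task" field name here is an assumption; check the project's README or OpenAPI docs for the actual schema):

```python
import json
import urllib.request

# Hypothetical payload shape -- the "task" field name is an assumption,
# not confirmed by the project docs.
payload = {"task": "Compare vector databases for RAG workloads"}

def submit_task(base_url: str = "http://localhost:8000") -> dict:
    """POST a research task to the (assumed) /deep-research endpoint."""
    req = urllib.request.Request(
        f"{base_url}/deep-research",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Call `submit_task()` with the host and port you started the server on; the response shape depends on the server's implementation.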

How to install

Prerequisites:

  • Python 3.10+ installed
  • Git installed
  • Optional: a virtual environment tool (venv) to isolate dependencies

Steps:

  1. Clone the repository
git clone https://github.com/troyhantech/deep-research.git
cd deep-research
  2. Create and activate a virtual environment (optional but recommended)
python -m venv venv
# On Windows
venv\Scripts\activate
# On macOS/Linux
source venv/bin/activate
  3. Install dependencies
pip install -r requirements.txt
# Alternatively, use uv as described in the docs
pip install uv
uv pip install -r requirements.txt
  4. Prepare configuration
  • Copy and customize the environment variables
cp .env.example .env
  • Copy the example config if needed
cp config.toml.example config.toml
  5. Run the service
python main.py --mode mcp_stdio

Or run in HTTP API mode:

python main.py --mode http_api --host 0.0.0.0 --port 8000

Notes:

  • Adjust the environment variables in .env to include your OpenAI API key and any LangSmith tracing/config keys if used.
  • In MCP configuration, ensure the correct transport is enabled (mcp_stdio for local stdio, or mcp_streamable_http for remote access).
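A minimal .env might look like the following (variable names are taken from the install command above; all values are placeholders):

```
OPENAI_API_KEY=your-openai-api-key
OPENAI_BASE_URL=https://api.openai.com/v1/
# Optional: LangSmith tracing
LANGSMITH_TRACING=true
LANGSMITH_API_KEY=your-langsmith-api-key
LANGSMITH_PROJECT=your-langsmith-project
LANGSMITH_ENDPOINT=https://api.smith.langchain.com
```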

Additional notes

Tips and common issues:

  • If using streamable_http, ensure the URL is reachable from the host running the server and that any API keys are included in the URL as shown in the README example.
  • For large tasks, monitor max_reasoning_times and max_subtasks in config.toml to prevent runaway iterations.
  • When using HTTP API mode, the default endpoints are /deep-research for task submission and /web for the UI; ensure firewall rules allow access to the configured port.
  • If you encounter environment variable issues, double-check the .env path passed via --env-file and confirm the variables are loaded by the runtime.
  • The MCP tools exposed to workers should be tailored to the problem domain to optimize context usage and performance (e.g., limit include_tools to the necessary set).
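As a hypothetical sketch of the iteration limits mentioned above (the key names max_reasoning_times and max_subtasks appear on this page, but their section name and sensible values are assumptions; compare against config.toml.example):

```toml
# Hypothetical layout -- verify section and key names against config.toml.example
[agent]
max_reasoning_times = 10  # cap on planner/worker reasoning iterations
max_subtasks = 8          # cap on subtasks generated per plan
```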
