deep-research
A minimalist deep research framework for any OpenAI-API-compatible LLM.
```shell
claude mcp add --transport stdio troyhantech-deep-research \
  python main.py --env-file .env --config-file config.toml --mode mcp_stdio \
  --env OPENAI_API_KEY="your-openai-api-key" \
  --env OPENAI_BASE_URL="https://api.openai.com/v1/" \
  --env LANGSMITH_API_KEY="your-langsmith-api-key" \
  --env LANGSMITH_PROJECT="your-langsmith-project" \
  --env LANGSMITH_TRACING="true" \
  --env LANGSMITH_ENDPOINT="https://api.smith.langchain.com"
```
How to use
Deep Research is a Python-based, MCP-enabled research automation tool built on FastAPI that orchestrates multiple AI agents (Planner, Workers, and Reporter) to decompose complex research tasks into subtasks, execute them via MCP tools, and aggregate results into a final report. The system exposes both MCP transport (stdio/streamable_http) and an HTTP API, allowing integration with MCP clients or direct HTTP requests. To use it, deploy the server and connect with the MCP client of your choice, configuring the MCP transport and the set of tools to expose to workers. The workflow iterates through planning, parallel task execution, and reporting until a final report is produced. You can also run an HTTP API mode to directly POST tasks, fetch web-based reports, or query results programmatically.
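As a sketch of the HTTP API mode described above, the snippet below builds and sends a task-submission request. The `/deep-research` path comes from the endpoint notes further down; the payload field name (`task`) and the default host/port are assumptions to verify against your deployment:

```python
import json
from urllib import request

# NOTE: the payload shape ({"task": ...}) is an assumption; check the
# project's API docs for the exact request schema of your version.
API_URL = "http://localhost:8000/deep-research"


def build_task_request(task: str) -> request.Request:
    """Build a JSON POST request submitting one research task."""
    body = json.dumps({"task": task}).encode("utf-8")
    return request.Request(
        API_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def submit_task(task: str) -> str:
    """Send the task and return the raw response body.

    Blocks until the server finishes the planning/execution/reporting
    loop and returns the final report (or a task handle, depending on
    the API's design).
    """
    with request.urlopen(build_task_request(task)) as resp:
        return resp.read().decode("utf-8")
```

With the server started via `python main.py --mode http_api --host 0.0.0.0 --port 8000`, calling `submit_task("...")` POSTs the task and returns the response body.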
How to install
Prerequisites:
- Python 3.10+ installed
- Git installed
- Optional: a virtual environment tool (venv) to isolate dependencies
Steps:
- Clone the repository
git clone https://github.com/troyhantech/deep-research.git
cd deep-research
- Create and activate a virtual environment (optional but recommended)
python -m venv venv
# On Windows
venv\Scripts\activate
# On macOS/Linux
source venv/bin/activate
- Install dependencies
pip install -r requirements.txt
# If you want to use uv as described in docs
pip install uv
uv pip install -r requirements.txt
- Prepare configuration
- Copy and customize environment variables
cp .env.example .env
- Copy example config if needed
cp config.toml.example config.toml
- Run the service
python main.py --mode mcp_stdio
Or run via the HTTP API mode
python main.py --mode http_api --host 0.0.0.0 --port 8000
Notes:
- Adjust the environment variables in .env to include your OpenAI API key and any LangSmith tracing/config keys if used.
- In MCP configuration, ensure the correct transport is enabled (mcp_stdio for local stdio, or mcp_streamable_http for remote access).
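A minimal `.env` might look like the sketch below. The keys mirror the `claude mcp add` example above; the LangSmith variables are only needed if tracing is enabled, and the placeholder values must be replaced with your own:

```
OPENAI_API_KEY=your-openai-api-key
OPENAI_BASE_URL=https://api.openai.com/v1/
# Optional: LangSmith tracing
LANGSMITH_API_KEY=your-langsmith-api-key
LANGSMITH_PROJECT=your-langsmith-project
LANGSMITH_TRACING=true
LANGSMITH_ENDPOINT=https://api.smith.langchain.com
```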
Additional notes
Tips and common issues:
- If using streamable_http, ensure the URL is reachable from the machine running the MCP client and that any required API keys are included in the URL, as shown in the README example.
- For large tasks, monitor max_reasoning_times and max_subtasks in config.toml to prevent runaway iterations.
- When using HTTP API mode, the default endpoints are /deep-research for task submission and /web for the UI; ensure firewall rules allow access to the configured port.
- If you encounter environment variable issues, double-check the .env path passed via --env-file and confirm the variables are loaded by the runtime.
- The MCP tools exposed to workers should be tailored to the problem domain to optimize context usage and performance (e.g., limit include_tools to the necessary set).
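To illustrate the last two tips, a hedged config.toml sketch follows. Only the key names `max_reasoning_times`, `max_subtasks`, and `include_tools` come from the tips above; the values and the tool names are illustrative assumptions to check against `config.toml.example`:

```toml
max_reasoning_times = 5                  # cap planner iterations to prevent runaway loops
max_subtasks = 10                        # cap subtasks spawned per planning round
include_tools = ["web_search", "fetch"]  # hypothetical tool names; expose only what workers need
```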