deep-research
Use any LLM (Large Language Model) for Deep Research. Supports an SSE API and an MCP server.
Quick add with Claude Code (the `--` separates the server name from its launch command; `-i --rm` keeps the container attached so the stdio transport works, whereas a detached `-d` run would not):

```shell
claude mcp add --transport stdio u14app-deep-research -- docker run -i --rm -p 3333:3000 xiangfa/deep-research:latest
```
How to use
Deep Research is a next-generation, privacy-focused research assistant that runs locally in your browser and can be exposed as an MCP service. It leverages multiple AI models and web-search integrations to generate comprehensive research reports in minutes. The server-side Docker image provided (xiangfa/deep-research) enables you to run the entire app in a container, exposing a REST/SSE API surface that you can call from your MCP client or other services. It supports multi-LLM backends, local knowledge base capabilities, and a flexible API path for integration with other tools.
To use the MCP service, start the Docker container and connect to the exposed port (3333 by default). You can then interact with the server via the configured API endpoints to submit research requests, fetch results, manage knowledge bases, and perform re-research from a given stage. The project is designed to work with SaaS-style SSE API access as well as MCP-based integration, giving you the option to deploy as a standalone web app or as an API-backed service within your MCP ecosystem.
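To illustrate the request flow, here is a minimal sketch. The endpoint path (`/api/sse`) and the payload fields are assumptions for illustration only, not confirmed by the project docs; consult the repo for the actual API surface.

```shell
# Hypothetical example: endpoint path and payload shape are assumptions.
PAYLOAD='{"query": "state of open-source LLM inference engines"}'

# Sanity-check that the payload is well-formed JSON before sending it.
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload ok"

# With the container running, stream the response over SSE (-N disables buffering):
# curl -N -X POST http://localhost:3333/api/sse \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```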
How to install
Prerequisites:
- Docker installed and running
- Optional: Docker Compose if you prefer a compose-based setup
Installation (Docker):

- Pull and run the Docker image:

```shell
docker pull xiangfa/deep-research:latest
docker run -d --name deep-research -p 3333:3000 xiangfa/deep-research:latest
```

- Verify the container is running:

```shell
docker ps
```

- Access the app API at http://localhost:3333 (host port 3333 is mapped to port 3000 inside the container)
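Once the container is up, a quick sanity check can confirm the app is reachable. This is a deployment-time fragment, and it assumes the root path responds; adjust the URL if the app serves under a different route:

```shell
docker logs deep-research --tail 20
curl -sf http://localhost:3333/ > /dev/null && echo "app reachable"
```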
Alternative (local development without Docker):

- Ensure Node.js v18+ and a package manager (pnpm, npm, or yarn) are installed
- Clone the repository and install dependencies
- Set up environment variables as described in the repo's docs
- Run the dev server:

```shell
git clone https://github.com/u14app/deep-research.git
cd deep-research
pnpm install   # or: npm install / yarn install
pnpm dev       # or: npm run dev / yarn dev
```
Environment variables:
- NEXT_PUBLIC_MODEL_LIST (for custom model lists in proxy mode)
- Other model/API keys as required by your deployment (Gemini, OpenAI, etc.)
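As an illustration, a minimal `.env` might look like the following. Only NEXT_PUBLIC_MODEL_LIST is named in this document; the comma-separated value format and the API-key variable names are placeholders, so use the exact names from the repo's docs:

```shell
NEXT_PUBLIC_MODEL_LIST=model-a,model-b   # value format is an assumption
GEMINI_API_KEY=your-gemini-key           # placeholder variable name
OPENAI_API_KEY=your-openai-key           # placeholder variable name
```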
Note: The Docker approach is recommended for MCP deployments to ensure consistent runtime and dependencies.
Additional notes
- The container listens on port 3000 internally; the examples here map it to host port 3333. Adjust the -p mapping if needed.
- Updates to the Docker Hub image may lag behind the repository; this is normal given build and publish cycle times.
- For custom model lists in proxy mode, set NEXT_PUBLIC_MODEL_LIST in your environment variables or .env file.
- The deployment supports both the SSE API and MCP clients; you can adapt the container to your MCP orchestration workflow.
- If you’re deploying to cloud runners, consider setting additional environment variables for model keys, search engines, and knowledge-base sources as described in the project docs.
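For container deployments, such variables can be passed at run time with -e flags. The key names below are placeholders for illustration; substitute the variable names documented in the repo:

```shell
docker run -d --name deep-research -p 3333:3000 \
  -e NEXT_PUBLIC_MODEL_LIST="model-a,model-b" \
  -e GEMINI_API_KEY="your-gemini-key" \
  xiangfa/deep-research:latest
```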
Related MCP Servers
SearChat
Search + Chat = SearChat (AI chat with search). An AI conversational search engine supporting DeepResearch, the OpenAI/Anthropic/VertexAI/Gemini APIs, the SearXNG meta-search engine, and one-click Docker deployment.
argo
ARGO is an open-source AI Agent platform that brings Local Manus to your desktop. With one-click model downloads, seamless closed-LLM integration, and offline-first RAG knowledge bases, ARGO becomes a DeepResearch powerhouse for autonomous thinking and task planning, with 100% of your data staying local. Supports Win/Mac/Docker.
gptr
MCP server for enabling LLM applications to perform deep research via the MCP protocol
coplay-unity-plugin
Unity plugin for Coplay
mcp-llm
An MCP server that provides LLMs access to other LLMs
Archive-Agent
Find your files with natural language and ask questions.