deep-research
The Deep Research Assistant is built on Mastra's modular, scalable architecture for intelligent orchestration and seamless human-AI interaction, and is designed to tackle complex research challenges autonomously.
```shell
claude mcp add --transport stdio ssdeanx-deep-research node server.js \
  --env PORT="3000" \
  --env LOG_LEVEL="info" \
  --env MASTRA_API_KEY="your-mastra-api-key" \
  --env OPENAI_API_KEY="your-openai-api-key" \
  --env MASTRA_BASE_URL="your-mastra-base-url"
```
How to use
Deep Research is an MCP server that powers an autonomous research assistant workflow with Mastra orchestration, agent orchestration, and graph-based RAG capabilities. It enables multiple AI agents (Research Agent, Report Agent, Evaluation Agent, Learning Extraction Agent, Web Summarization Agent, RAG Agent, GitHub Agent, Monitor Agent, Planning Agent, Quality Assurance Agent) to collaborate on deep research tasks, integrate with vector search, and produce structured reports. You can start the server and interact with its endpoints to run end-to-end research pipelines, trigger workflows, and inspect results from a centralized interface. The system is designed for human-in-the-loop review where necessary, allowing you to approve or refine agent decisions before final outputs are produced.
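For reference, the `claude mcp add` command above corresponds to an MCP client configuration entry roughly like the following (the server name and env values are illustrative placeholders; adjust them to your setup):

```json
{
  "mcpServers": {
    "ssdeanx-deep-research": {
      "command": "node",
      "args": ["server.js"],
      "env": {
        "PORT": "3000",
        "LOG_LEVEL": "info",
        "MASTRA_API_KEY": "your-mastra-api-key",
        "OPENAI_API_KEY": "your-openai-api-key",
        "MASTRA_BASE_URL": "your-mastra-base-url"
      }
    }
  }
}
```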
How to install
Prerequisites:
- Node.js 18+ (preferably Node.js 20.x as indicated by the project badges)
- npm (comes installed with Node.js) or yarn
- Git
- Access to external services (e.g., OpenAI API key, Mastra credentials) if you want to run end-to-end
Installation steps:

- Clone the repository:

  ```shell
  git clone https://github.com/ssdeanx/deep-research.git
  cd deep-research
  ```

- Install dependencies:

  ```shell
  npm install
  ```

  or, if using yarn:

  ```shell
  yarn install
  ```
- Configure environment: create a .env file or set environment variables as described in the mcp_config:
  - PORT
  - OPENAI_API_KEY
  - MASTRA_BASE_URL
  - MASTRA_API_KEY
  - LOG_LEVEL
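A minimal .env might look like the following (all values are placeholders; substitute your own keys and endpoints):

```shell
# .env — placeholder values, replace with your own credentials
PORT=3000
LOG_LEVEL=info
OPENAI_API_KEY=your-openai-api-key
MASTRA_API_KEY=your-mastra-api-key
MASTRA_BASE_URL=https://your-mastra-instance.example.com
```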
- Build (if applicable):

  ```shell
  npm run build
  ```

  Depending on the project setup, you may be able to skip this step if the server runs from source.
- Run the server:

  ```shell
  npm start
  ```

  or use the explicit node command from mcp_config:

  ```shell
  node server.js
  ```

- Verify startup by visiting http://localhost:3000 (or the port you configured). Ensure the MCP endpoint is reachable and the Mastra orchestration services are accessible.
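Assuming the server exposes an HTTP interface on the configured port (the endpoint path here is an assumption; check the repository for the actual routes), a quick smoke test might look like:

```shell
# Print the HTTP status code for the server root; a non-000 code
# confirms something is listening on the configured port
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000/
```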
Additional notes
Tips and common issues:
- Ensure your OpenAI API key and Mastra credentials are valid and have the required permissions.
- If the server fails to start because the port is already in use, change the PORT value in the environment or in mcp_config to an available port.
- Check logs for OpenTelemetry tracing or observability data if you enable tracing.
- For authentication or restricted endpoints, implement proper API key handling or OAuth as per your security policies.
- If running in a containerized environment, map ports and provide necessary env vars via docker run or compose files.
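As a sketch of the containerized setup described above, a docker-compose service might look like the following (the build context, image layout, and env handling are assumptions; adapt them to your Dockerfile):

```yaml
# docker-compose.yml — illustrative sketch, not shipped with the repository
services:
  deep-research:
    build: .                # assumes a Dockerfile at the repo root
    command: node server.js
    ports:
      - "3000:3000"         # host:container, match the PORT env var
    environment:
      PORT: "3000"
      LOG_LEVEL: "info"
      OPENAI_API_KEY: "${OPENAI_API_KEY}"
      MASTRA_API_KEY: "${MASTRA_API_KEY}"
      MASTRA_BASE_URL: "${MASTRA_BASE_URL}"
```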
- The MCP configuration assumes a Node.js-based backend; if the repository shifts to a different runtime, update the mcp_config accordingly (e.g., replace command/args with a Python uvx or docker invocation).
- Keep dependencies updated to align with the latest Mastra and MCP protocol requirements to avoid compatibility issues.
Related MCP Servers
ai-guide
Programmer Yupi's comprehensive AI resource collection and zero-to-hero Vibe Coding tutorial: a guide to choosing large models (DeepSeek / GPT / Gemini / Claude), the latest AI news, a prompt library, an AI knowledge encyclopedia (RAG / MCP / A2A), AI programming tutorials, AI tool guides (Cursor / Claude Code / OpenClaw / TRAE / Lovable / Agent Skills), AI development framework tutorials (Spring AI / LangChain), and a guide to monetizing AI products, helping you quickly master AI technology and stay ahead of the curve. This project is the open-source documentation version and has been upgraded into Yupi's AI navigation website.
deep-research
Use any LLMs (Large Language Models) for Deep Research. Support SSE API and MCP server.
coplay-unity-plugin
Unity plugin for Coplay
adal-cli
The self-evolving coding agent that learns from your entire team and codebase. Less syncing. Less waiting. Deliver at the speed of thought.
gopls
MCP server for Go project development: expands AI code agents' capabilities with semantic understanding of and deterministic information about Go projects.
mcp-config-manager
Manage MCP server configs across Claude, Gemini & other AI systems. Interactive CLI for server enable/disable, preset management & config sync.