deep-research
MCP Deep Research Server using Gemini to create a Research AI Agent
```shell
claude mcp add --transport stdio ssdeanx-deep-research-mcp-server node src/mcp-server.ts \
  --env GOOGLE_CSE_ID="your Google Custom Search Engine ID (optional)" \
  --env GEMINI_API_KEY="your Gemini API key" \
  --env GOOGLE_API_KEY="your Google API key for Search Grounding"
```
How to use
Deep Research MCP Server exposes an MCP-compatible interface that orchestrates a Gemini 2.5 Flash-powered deep research pipeline. It supports iterative query refinement, SERP-style query generation, semantic content splitting, and batched model calls with tooling (Google Search Grounding, Code Execution, Functions), and produces a structured Markdown report as the final output. The server is designed to integrate with MCP-aware clients and the Inspector tool, enabling researchers to embed the agent into larger automation workflows.

To use it, start the MCP server locally and connect an MCP client to the provided entry point; then issue tasks that leverage the deep research workflow to perform iterative investigations, extract learnings, and generate professional reports. Depth/breadth controls and token-aware chunking help tailor exploration and outputs to your research scope, and individual tools (e.g., search grounding or code execution) can be toggled on or off via environment flags during the research cycles.
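As a sketch of the client side of this integration, an MCP-aware client such as Claude Desktop registers a stdio server in its configuration file. The server name, file location, and env values below are placeholders, not prescribed by the project:

```json
{
  "mcpServers": {
    "deep-research": {
      "command": "node",
      "args": ["src/mcp-server.ts"],
      "env": {
        "GEMINI_API_KEY": "your Gemini API key",
        "GOOGLE_API_KEY": "your Google API key (optional)",
        "GOOGLE_CSE_ID": "your Custom Search Engine ID (optional)"
      }
    }
  }
}
```

The client spawns the configured command and speaks MCP over the process's stdin/stdout, so no network port needs to be exposed.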
How to install
Prerequisites:
- Node.js v22.x installed
- Git installed
- Access to the Gemini 2.5 Flash API (Gemini API key)
- Optional: Google Search Grounding API key if you plan to use the SERP tool
1. Clone the repository

   ```shell
   git clone https://github.com/ssdeanx-deep-research-mcp-server.git
   cd ssdeanx-deep-research-mcp-server
   ```

2. Install dependencies

   ```shell
   npm install
   ```

3. Configure environment

   Create a copy of the example env file and populate the keys:

   ```shell
   cp .env.example .env
   ```

   Edit `.env` to add `GEMINI_API_KEY`, `GOOGLE_API_KEY`, and `GOOGLE_CSE_ID` if needed.

4. Build (if a build step exists) and start the MCP server

   ```shell
   npm run build   # if a build script exists
   npm start       # or run the MCP server directly via Node (see mcp_config for the entry point)
   ```

5. Verify the server is running

   Check the console for the MCP server startup message and accessible endpoints.
Notes:
- If you run directly from TypeScript sources, you may need ts-node or a build step to transpile to JavaScript first.
- Ensure your environment variables are securely stored and not committed to version control.
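For reference, a populated `.env` might look like the following. The values are placeholders; the variable names follow those used above:

```shell
# Required: key for Gemini-based reasoning
GEMINI_API_KEY=your-gemini-api-key
# Optional: only needed for Google Search Grounding
GOOGLE_API_KEY=your-google-api-key
# Optional: only needed for the SERP tool
GOOGLE_CSE_ID=your-custom-search-id
```

Keep this file out of version control (e.g., listed in `.gitignore`).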
Additional notes
Tips and common considerations:
- Environment variables: GEMINI_API_KEY is mandatory for Gemini-based reasoning. GOOGLE_API_KEY and GOOGLE_CSE_ID are optional and only needed if you use Google Search Grounding or the SERP tool.
- Tools: You can enable Google Search Grounding, Code Execution, and Functions via flags or environment settings. If a tool is disabled, the corresponding capability will be unavailable in the research pipeline.
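The exact toggle names are not documented here, so the variable names below are hypothetical, shown only to illustrate the common pattern of per-tool boolean env flags; check the project's `.env.example` for the real ones:

```shell
# Hypothetical flag names -- consult .env.example for the actual settings
ENABLE_SEARCH_GROUNDING=true
ENABLE_CODE_EXECUTION=false
ENABLE_FUNCTIONS=true
```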
- Output: The server emphasizes deterministic, Zod-validated JSON outputs and structured Markdown reports. If outputs appear inconsistent, check batching settings, token limits, and cache configurations.
- MCP integration: Designed to work with MCP-aware clients and Inspector for testing; ensure your MCP client targets the server entry (src/mcp-server.ts) or the compiled entry equivalent in your deployment.
- Performance: The pipeline uses concurrency-limited batching and LRU caches. Adjust concurrency or cache sizes if you run into rate limits or memory constraints.
Related MCP Servers
gemini-cli
An open-source AI agent that brings the power of Gemini directly into your terminal.
ai-guide
Programmer Yupi's comprehensive AI resource collection plus a zero-to-hero Vibe Coding tutorial, covering a large-model selection guide (DeepSeek / GPT / Gemini / Claude), the latest AI news, a prompt collection, an AI knowledge encyclopedia (RAG / MCP / A2A), AI programming tutorials, AI tool usage guides (Cursor / Claude Code / OpenClaw / TRAE / Lovable / Agent Skills), AI development framework tutorials (Spring AI / LangChain), and a guide to monetizing AI products, helping you quickly master AI technology and stay ahead of the curve. This project is the open-source documentation version and has been upgraded to the Yupi AI navigation website.
hexstrike-ai
HexStrike AI MCP Agents is an advanced MCP server that lets AI agents (Claude, GPT, Copilot, etc.) autonomously run 150+ cybersecurity tools for automated pentesting, vulnerability discovery, bug bounty automation, and security research. Seamlessly bridge LLMs with real-world offensive security capabilities.
cursor-talk-to-figma
TalkToFigma: MCP integration between AI Agent (Cursor, Claude Code) and Figma, allowing Agentic AI to communicate with Figma for reading designs and modifying them programmatically.
kindly-web-search
Kindly Web Search MCP Server: Web search + robust content retrieval for AI coding tools (Claude Code, Codex, Cursor, GitHub Copilot, Gemini, etc.) and AI agents (Claude Desktop, OpenClaw, etc.). Supports Serper, Tavily, and SearXNG.
c4-genai-suite
c4 GenAI Suite