mcp-ollama-deep-researcher
MCP server from Cam10001110101/mcp-server-ollama-deep-researcher
claude mcp add --transport stdio cam10001110101-mcp-server-ollama-deep-researcher node dist/index.js \
  --env EXA_API_KEY="Your Exa API key (or placeholder)" \
  --env TAVILY_API_KEY="Your Tavily API key (or placeholder)" \
  --env PERPLEXITY_API_KEY="Your Perplexity API key (or placeholder)"
How to use
This MCP server implements a local, secure extension using the Model Context Protocol (MCP) over stdio. The Ollama Deep Researcher DXT Extension enables topic research by orchestrating web searches via Tavily, Perplexity, and Exa, combined with local LLMs served through Ollama (such as DeepSeek models), all within a configurable research workflow. The server exposes standard MCP tools such as research, get_status, and configure. You can start a research session, poll its status, and adjust parameters such as maxLoops, llmModel, and searchApi to tailor results. Logs are emitted to stderr to aid debugging, and long-running subprocesses are guarded with timeouts to avoid hangs.
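As a sketch of what invoking those tools looks like on the wire, the messages below follow the standard MCP `tools/call` JSON-RPC shape. The argument names (`maxLoops`, `llmModel`, `searchApi`, `topic`) are assumptions based on the parameters described above; consult the server's tool listing for the authoritative schema.

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 message for an MCP tools/call request."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical argument names, based on the parameters described above.
configure_msg = make_tool_call(1, "configure", {
    "maxLoops": 3,
    "llmModel": "deepseek-r1:8b",
    "searchApi": "tavily",
})
research_msg = make_tool_call(2, "research", {"topic": "quantum error correction"})

# The MCP stdio transport frames each message as one JSON object per line.
print(json.dumps(configure_msg))
print(json.dumps(research_msg))
```

An MCP host normally builds and frames these messages for you; the sketch is only meant to show what "start a research session, then adjust parameters via configure" translates to at the protocol level.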
How to install
Prerequisites:
- Node.js and npm installed on your system
- Python (and pip) installed if you run the Python-based assistant logic directly
- Access keys for web search APIs (Tavily, Perplexity, Exa) if you intend to run web searches
Installation steps:
- Clone the repository and install dependencies
git clone <your-repo-url>
cd mcp-server-ollama-deep-researcher
npm install
- Install Python dependencies for the assistant (in case you run the Python logic directly)
cd src/assistant
pip install -r requirements.txt
# or use your preferred Python environment manager
- Set required environment variables for web search APIs (example)
export TAVILY_API_KEY=your_tavily_key
export PERPLEXITY_API_KEY=your_perplexity_key
export EXA_API_KEY=your_exa_key
- Build the TypeScript server (if needed)
npm run build
- Run the MCP server locally for testing
node dist/index.js
# Or run via your DXT host per its documentation
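Before launching, it can help to fail fast if the search-API keys are not exported. A minimal Python sketch of such a preflight check (the variable names match the export example above; the launcher is a hypothetical convenience, not part of the server):

```python
import os
import subprocess

REQUIRED_KEYS = ["TAVILY_API_KEY", "PERPLEXITY_API_KEY", "EXA_API_KEY"]

def missing_keys(env=os.environ):
    """Return the search-API keys that are not set in the environment."""
    return [key for key in REQUIRED_KEYS if not env.get(key)]

def launch_server():
    """Start the MCP server over stdio, raising if any key is absent."""
    absent = missing_keys()
    if absent:
        raise RuntimeError(f"Missing environment variables: {', '.join(absent)}")
    # Pipes are wired so an MCP host can speak JSON-RPC over stdin/stdout.
    return subprocess.Popen(
        ["node", "dist/index.js"],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )

# proc = launch_server()  # raises if any key above is unset
```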
Additional notes
- Ensure API keys are present in your environment when performing web searches; keys are not logged.
- The server uses MCP over stdio, so it should be integrated with a compatible DXT host.
- Research subprocesses are killed after 5 minutes to prevent hangs; adjust timeout settings if needed in your environment.
- Logs and errors are emitted to stderr to aid debugging; check logs when troubleshooting.
- If you modify configuration (maxLoops, llmModel, searchApi), use the configure tool via MCP to apply changes at runtime.
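The 5-minute subprocess guard mentioned above can be illustrated with a generic timeout wrapper. This is a sketch of the pattern, not the server's actual implementation:

```python
import subprocess

RESEARCH_TIMEOUT_SECONDS = 5 * 60  # matches the 5-minute guard described above

def run_with_timeout(cmd, timeout=RESEARCH_TIMEOUT_SECONDS):
    """Run a subprocess, killing it if it exceeds the timeout.

    Returns (returncode, stdout); returncode is None if the process
    was killed for running too long.
    """
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    try:
        out, _err = proc.communicate(timeout=timeout)
        return proc.returncode, out
    except subprocess.TimeoutExpired:
        proc.kill()          # prevent a hung research subprocess from lingering
        proc.communicate()   # reap the killed process
        return None, b""
```

Adjusting the timeout in your own environment then amounts to passing a different `timeout` value rather than waiting out the full five minutes.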
Related MCP Servers
mcp-vegalite
MCP server from isaacwasserman/mcp-vegalite-server
github-chat
A Model Context Protocol (MCP) for analyzing and querying GitHub repositories using the GitHub Chat API.
nautex
MCP server for guiding Coding Agents via an end-to-end requirements-to-implementation-plan pipeline
pagerduty
PagerDuty's official local MCP (Model Context Protocol) server which provides tools to interact with your PagerDuty account directly from your MCP-enabled client.
futu-stock
MCP server for Futu NiuNiu stock
mcp-boilerplate
Boilerplate using one of the 'better' ways to build MCP servers, written using FastMCP.