ollama
MCP server from NewAITees/ollama-MCP-server
claude mcp add --transport stdio newaitees-ollama-mcp-server uv --directory /path/to/ollama-MCP-server run ollama-MCP-server
How to use
This MCP server acts as a bridge between a local Ollama LLM instance and MCP-compatible client applications. It exposes a set of tools that let clients decompose complex tasks, evaluate results against given criteria, and run Ollama models directly through the MCP protocol. The server handles task management, results evaluation, and model execution, while offering structured error messages and performance optimizations such as connection pooling and an LRU cache to improve responsiveness when similar requests are repeated. Typical workflows involve decomposing a difficult problem into sub-tasks, executing model-powered steps, and then evaluating or revising outputs based on defined criteria.
Available tools include:
- add-task: Create a new task with required fields name and description (plus optional priority, deadline, tags).
- decompose-task: Use Ollama to break a task (task_id) into subtasks with a specified granularity (high, medium, low) and optional max_subtasks.
- evaluate-result: Assess a result (result_id) against a criteria object, with an optional detailed flag to obtain granular feedback.
- run-model: Execute an Ollama model with a given prompt and optional temperature and max_tokens settings. The server validates parameters against the model execution schema and returns the model's response.

Together, these tools cover end-to-end task management, from planning through execution to evaluation, all via MCP-compatible calls; the sketch below shows the argument shapes they expect.
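As a quick illustration, here are plausible argument payloads for each tool; the field names follow the descriptions above, while all values (task names, IDs, criteria keys) are hypothetical:

```python
# Illustrative argument payloads for each tool, based on the tool
# descriptions above. All field values are made up for this example.

add_task_args = {
    "name": "Summarize architecture docs",        # required
    "description": "Produce a one-page summary",  # required
    "priority": 2,                                 # optional
    "tags": ["docs", "summary"],                   # optional
}

decompose_task_args = {
    "task_id": "task-123",     # ID of a previously created task (hypothetical)
    "granularity": "medium",   # one of: high, medium, low
    "max_subtasks": 5,         # optional cap on generated subtasks
}

evaluate_result_args = {
    "result_id": "result-456",                      # hypothetical result ID
    "criteria": {"accuracy": 0.5, "clarity": 0.5},  # criteria object
    "detailed": True,                               # request granular feedback
}

run_model_args = {
    "prompt": "Explain connection pooling in two sentences.",
    "temperature": 0.7,        # optional sampling temperature
    "max_tokens": 200,         # optional response length cap
}
```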
Clients interact with the server by addressing the mcpServers/ollama-MCP-server endpoint and invoking the appropriate tool_name with a structured arguments payload. The server also supports advanced model selection via environment configuration and MCP settings, so you can specify the default model or a set of available models when running tasks. A minimal client sketch follows.
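For example, a client built on the official MCP Python SDK (the mcp package) could launch the server over stdio and invoke run-model as follows; the directory path is a placeholder, and the prompt is arbitrary:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch the server over stdio, mirroring the uv command shown above.
# /path/to/ollama-MCP-server is a placeholder you must adjust.
server = StdioServerParameters(
    command="uv",
    args=["--directory", "/path/to/ollama-MCP-server", "run", "ollama-MCP-server"],
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Invoke the run-model tool with a structured arguments payload.
            result = await session.call_tool(
                "run-model",
                {"prompt": "Say hello in one sentence.", "temperature": 0.7},
            )
            print(result.content)

asyncio.run(main())
```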
How to install
Prerequisites:
- Python (and pip) installed and available on your PATH
- Ollama installed and a compatible model available locally
- Optional: uv or uvx runtime if you intend to use those execution environments
Installation steps:
- Install the MCP server package (Python): pip install ollama-mcp-server
- Ensure Ollama is running locally and accessible at the default host/port: OLLAMA_HOST=http://localhost:11434
- Run the MCP server (example using uv): uv --directory /path/to/ollama-MCP-server run ollama-MCP-server
- Optional: configure environment variables for model defaults and logging:
  - DEFAULT_MODEL=llama3
  - OLLAMA_HOST=http://localhost:11434
  - LOG_LEVEL=info
- Optional: test a quick-start call using the MCP tool interface described in the README to verify that the server is reachable and the tools are functioning (a reachability check is sketched just below).
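Before wiring up the MCP server, one way to confirm the Ollama prerequisite is to query its HTTP API at OLLAMA_HOST; GET /api/tags is Ollama's endpoint for listing locally pulled models. A minimal sketch, assuming the default host:

```python
import json
import os
import urllib.request

# Read the same env var the server uses, falling back to Ollama's default.
host = os.environ.get("OLLAMA_HOST", "http://localhost:11434")

# GET /api/tags lists the models pulled into the local Ollama instance.
with urllib.request.urlopen(f"{host}/api/tags", timeout=5) as resp:
    models = json.load(resp)["models"]

print("Available models:", [m["name"] for m in models])
```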
Notes:
- If you prefer a different runtime, you can switch to uvx for Python-based execution as demonstrated in the README configurations.
- Ensure the directory you point uv/uvx to contains the ollama-MCP-server entry point and relevant configuration files.
Additional notes
Tips and common considerations:
- Model selection priority: a model named in the tool call arguments wins, followed by the MCP config env settings, then the OLLAMA_DEFAULT_MODEL environment variable, and finally the hard-coded default (llama3); see the sketch after this list.
- Environment variables you may set include OLLAMA_HOST, DEFAULT_MODEL, and LOG_LEVEL. The server will log and validate the available models at startup.
- The server provides structured error responses to help clients diagnose issues (e.g., missing task_id, invalid granularity, or an unavailable model list).
- For development and debugging, MCP Inspector can be used to visualize and inspect MCP messages and tool calls during tests.
- Performance tuning is possible via the internal config (cache_size, max_connections, request_timeout, etc.) to optimize response times under load.
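To make the selection order concrete, here is a minimal sketch of that fallback chain; the function name and the shape of the mcp_config argument are illustrative assumptions, not the server's actual internals:

```python
import os

def resolve_model(tool_args: dict, mcp_config: dict) -> str:
    """Illustrative fallback chain, not the server's real code.

    Priority: model in tool arguments > MCP config env >
    OLLAMA_DEFAULT_MODEL environment variable > hard-coded default.
    """
    return (
        tool_args.get("model")                             # 1. tool call arguments
        or mcp_config.get("env", {}).get("DEFAULT_MODEL")  # 2. MCP config env (assumed key)
        or os.environ.get("OLLAMA_DEFAULT_MODEL")          # 3. environment variable
        or "llama3"                                        # 4. hard-coded default
    )

# With nothing specified anywhere, the chain falls through to llama3.
print(resolve_model({}, {}))
```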
Related MCP Servers
mcp-vegalite
MCP server from isaacwasserman/mcp-vegalite-server
github-chat
A Model Context Protocol (MCP) server for analyzing and querying GitHub repositories using the GitHub Chat API.
nautex
MCP server for guiding coding agents through an end-to-end requirements-to-implementation-plan pipeline.
pagerduty
PagerDuty's official local MCP (Model Context Protocol) server which provides tools to interact with your PagerDuty account directly from your MCP-enabled client.
futu-stock
MCP server for Futu NiuNiu stock.
mcp-boilerplate
Boilerplate using one of the 'better' ways to build MCP Servers. Written using FastMCP