MCP-ollama_server
Extends Model Context Protocol (MCP) to local LLMs via Ollama, enabling Claude-like tool use (files, web, email, GitHub, AI images) while keeping data private. Modular Python servers for on-prem AI. #LocalAI #MCP #Ollama
claude mcp add --transport stdio sethuram2003-mcp-ollama_server python -m mcp_ollama_server \
  --env DEBUG="0 or 1" \
  --env OLLAMA_HOST="localhost" \
  --env OLLAMA_PORT="11434" \
  --env OllamaModel="llama"
How to use
MCP-Ollama Server connects the MCP protocol to local Ollama-powered LLMs, enabling on-premise AI with tool integration. It exposes modular services that let a local LLM perform tasks through the included modules: Calendar (calendar), File System (file_system), and the Client MCP module (client_mcp), which provides a unified interface for interacting with these services. The server routes MCP requests to the appropriate local Ollama-backed model, enabling local file access, calendar management, and other local tooling while keeping all data private on your infrastructure.
To use it, ensure Ollama is running and accessible on your machine, then start the MCP-Ollama Server using Python. Once running, you can interact with the MCP endpoint to invoke module capabilities—e.g., calendar operations, reading/writing files, or coordinating across modules through the client MCP interface. The architecture is modular and designed so you can deploy only the modules you need, while maintaining a single MCP-facing layer that abstracts the local Ollama model interactions.
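MCP requests are JSON-RPC 2.0 messages (sent newline-delimited over the stdio transport). As a sketch, a `tools/call` request for a hypothetical `read_file` tool might be built like this; the tool name and arguments here are illustrative, not taken from this repository's modules:

```python
import json

def build_tool_call(request_id, tool_name, arguments):
    """Build an MCP JSON-RPC 2.0 tools/call request as a single JSON line."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical file_system tool invocation; actual tool names are defined
# by the modules the server loads.
msg = build_tool_call(1, "read_file", {"path": "notes.txt"})
print(msg)
```

The server parses each such line, dispatches the named tool to the matching module, and streams the JSON-RPC response back over stdout.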
How to install
Prerequisites
- Python 3.8+ installed
- Ollama installed and running locally (https://ollama.ai/)
- Git installed
- Optionally, uv (the Python package and environment manager used by the repository)
Install steps
- Clone the repository:
git clone https://github.com/sethuram2003/mcp-ollama_server.git
cd mcp-ollama_server
- Create and activate a Python environment (recommended):
python3 -m venv venv
source venv/bin/activate # macOS/Linux
venv\Scripts\activate # Windows
- Install dependencies (if a requirements file exists, otherwise install as needed):
pip install -r requirements.txt
- Ensure Ollama is running locally (verify model availability, e.g., via Ollama UI or CLI). You should see an Ollama API listening on http://localhost:11434 by default.
- Run the MCP-Ollama Server:
# Using Python module entry point as described in the project
python -m mcp_ollama_server
- Verify the MCP endpoint is reachable and begin issuing MCP requests to the server.
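Before starting the server, you can confirm Ollama is reachable with a quick check against its HTTP API; the `/api/tags` endpoint lists installed models. A minimal sketch, assuming the default host and port:

```python
import json
import urllib.error
import urllib.request

def ollama_models(host="localhost", port=11434, timeout=2):
    """Return the list of installed model names, or None if Ollama is unreachable."""
    url = f"http://{host}:{port}/api/tags"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return None

models = ollama_models()
if models is None:
    print("Ollama is not reachable on localhost:11434 -- start it with `ollama serve`")
else:
    print("Available models:", models)
```

If the model named in OllamaModel is not in the returned list, pull it first (e.g. `ollama pull llama3`) before starting the MCP server.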
Additional notes
Environment and configuration tips:
- Ollama must be reachable at the configured host/port (default localhost:11434).
- If you change the Ollama host/port, update the environment variables OLLAMA_HOST and OLLAMA_PORT accordingly.
- The server is designed to load modular services (calendar, file_system, client_mcp). Deploy only the modules you need.
- If running in a containerized environment, ensure the container can reach the Ollama service and that port mappings allow MCP traffic.
- For debugging, enable DEBUG in the environment to get verbose logs. Check logs for module load order and any dependency issues.
- If you encounter module import errors, install the Python dependencies listed in each module's pyproject.toml or requirements.txt and ensure uv is properly installed.
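The environment variables above can be read with standard-library defaults. A minimal configuration sketch; the variable names come from the install command earlier, while the default values are assumptions mirroring it:

```python
import os

# Defaults mirror the values shown in the `claude mcp add` command above.
config = {
    "debug": os.environ.get("DEBUG", "0") == "1",
    "ollama_host": os.environ.get("OLLAMA_HOST", "localhost"),
    "ollama_port": int(os.environ.get("OLLAMA_PORT", "11434")),
    "model": os.environ.get("OllamaModel", "llama"),
}
print(config)
```

Reading configuration this way keeps the server runnable with zero setup locally while remaining overridable per deployment (e.g. a containerized Ollama on a different host).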
Related MCP Servers
mcp-vegalite
MCP server from isaacwasserman/mcp-vegalite-server
github-chat
A Model Context Protocol (MCP) for analyzing and querying GitHub repositories using the GitHub Chat API.
nautex
MCP server for guiding coding agents through an end-to-end requirements-to-implementation-plan pipeline
pagerduty
PagerDuty's official local MCP (Model Context Protocol) server which provides tools to interact with your PagerDuty account directly from your MCP-enabled client.
futu-stock
MCP server for Futu NiuNiu stock data
mcp-boilerplate
Boilerplate using one of the 'better' ways to build MCP Servers. Written using FastMCP