mcp-client-for-ollama

A text-based user interface (TUI) client for interacting with MCP servers using Ollama. Features include agent mode, multi-server connections, model switching, streaming responses, tool management, human-in-the-loop confirmations, thinking mode, model parameter configuration, MCP prompts, custom system prompts, and saved preferences. Built for developers working with local LLMs.

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio jonigl-mcp-client-for-ollama python -m ollmcp \
  --env PYTHONUNBUFFERED="1"

How to use

MCP Client for Ollama (ollmcp) is a Python-based CLI application that lets local Ollama LLMs call MCP servers for tool execution and workflow automation. It connects Ollama-hosted models to one or more MCP servers, enabling tool discovery, invocation, and streaming responses directly from your local environment. The client supports multiple transport types and a rich interactive interface, including agent mode for iterative tool calls, human-in-the-loop safety controls, and dynamic model/tool configuration. With ollmcp, you can manage tools, models, and server connections in real time, making it easier to integrate local LLMs with external APIs and custom tooling through MCP.

To use it, install ollmcp, ensure Ollama is running locally, and start the client. By default, the client auto-discovers MCP server configurations, or you can specify servers explicitly. The toolset includes MCP prompts, history management, and performance metrics to help you monitor tool usage and model behavior. You can connect to multiple MCP servers simultaneously, enable or disable tools per session, and use features such as agent mode for multi-step tool execution and human-in-the-loop (HIL) safety checks during tool calls.
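As a sketch of what an explicit server configuration can look like, here is a minimal file in the standard `mcpServers` JSON shape used by Claude Desktop-style configs, which ollmcp's auto-discovery is designed to work with. The server name, script path, and command below are placeholders, not part of ollmcp itself:

```json
{
  "mcpServers": {
    "my-weather-server": {
      "command": "python",
      "args": ["path/to/weather_server.py"],
      "env": { "PYTHONUNBUFFERED": "1" }
    }
  }
}
```

Pointing the client at a file like this (or letting auto-discovery find one) makes every tool those servers expose available to the model during a session.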

How to install

Prerequisites:

  • Python 3.10 or newer
  • Ollama installed and running locally
  • Optional: the uv package manager for a quicker install
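The prerequisites above can be checked from the terminal before installing. This is a sketch; it assumes Ollama is listening on its default port 11434 (`/api/version` is part of Ollama's HTTP API):

```shell
# Check the Python version (3.10 or newer is required)
python3 --version

# Check that the Ollama daemon is reachable on its default port
curl -fsS http://localhost:11434/api/version || echo "Ollama is not reachable"
```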

Install using pip (recommended):

  1. Create and activate a Python virtual environment (optional but recommended):
     python -m venv .venv
     source .venv/bin/activate   # on macOS/Linux
     .venv\Scripts\activate      # on Windows

  2. Install ollmcp from PyPI: pip install --upgrade ollmcp

  3. Run the MCP client for Ollama: ollmcp

Alternative quick start with uv (if you have uv installed):

  1. Install and run via uv: uvx ollmcp

If you prefer to run from source:

  1. Clone the repo and navigate to it
  2. Install dependencies: uv pip install .
  3. Run the client: uv run -m ollmcp


Additional notes

Tips and common considerations:

  • Ollama must be running locally for ollmcp to connect to local models.
  • If you have multiple MCP servers, you can connect to them simultaneously and selectively enable/disable tools during sessions.
  • For best performance, run ollmcp inside a dedicated virtual environment to avoid Python dependency conflicts. (Ollama itself is a standalone application and runs outside the virtual environment.)
  • If you encounter import errors or missing dependencies, ensure your Python environment is activated and re-run the install steps.
  • You can customize environment variables or integration settings via standard Python environment configuration (for example, setting PYTHONUNBUFFERED=1 to ensure unbuffered I/O).
  • When experimenting with Agent Mode or HIL, start with a small loop limit to prevent runaway tool calls during development.
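The environment-variable tip above can be sketched as follows. In practice you would prefix the real client command (`ollmcp`); the `python3 -c` line here is just a stand-in showing that an inline variable reaches the child process:

```shell
# In practice you would launch the client like this:
#   PYTHONUNBUFFERED=1 ollmcp
# Stand-in demonstrating that the child process sees the variable:
PYTHONUNBUFFERED=1 python3 -c 'import os; print(os.environ["PYTHONUNBUFFERED"])'
# → prints 1
```

Setting the variable inline scopes it to that single launch, so other Python processes on the system keep their default buffering.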
