
mcp-client-server-example

MCP server from rajeevchandra/mcp-client-server-example

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio rajeevchandra-mcp-client-server-example python math_server.py \
  --env OLLAMA_HOST="http://localhost:11434" \
  --env MCP_LOG_LEVEL="INFO"
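If the command succeeds, the server should appear in the output of claude mcp list; you can remove it later with claude mcp remove rajeevchandra-mcp-client-server-example.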

How to use

This MCP server example demonstrates a local AI agent workflow where an Ollama-backed LLM analyzes user queries and autonomously calls Python functions exposed by an MCP server. The math_server.py script exposes two simple tools, add(a, b) and multiply(a, b), and the Ollama client coordinates execution by describing the available tools to the model and invoking the chosen one via the MCP protocol. End-to-end, a user asks a math question, the client sends the query and available functions to the local LLM, the LLM selects a function and generates a tool_call, and the MCP server executes the function and returns the result to the user.
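As a rough illustration, the server side can be as small as the sketch below. It assumes the official MCP Python SDK's FastMCP helper; the repository's actual math_server.py may be structured differently.

# math_server.py (sketch): exposes add and multiply as MCP tools over stdio.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("math")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

if __name__ == "__main__":
    # stdio transport matches the --transport stdio flag in the install command above
    mcp.run(transport="stdio")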

To use it, start the MCP server (math_server.py) and ensure Ollama is running with a model that supports tool calling. Then start the MCP client script (ollama_client.py) to enable the LLM to reason about which tool to invoke. After setup, you can ask questions like “What is 5 + 8?” or “Multiply 7 and 9,” and the system will respond with the computed results. The workflow runs entirely locally, without external dependencies beyond the local LLM and Python runtime.
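The client-side round trip can be sketched with httpx against Ollama's native chat API, which accepts tool schemas and returns tool_calls in the response. This is illustrative only, not the repository's exact ollama_client.py:

# ollama_client.py (sketch): send a query plus tool metadata to the local LLM
# and print whichever tool_call the model generates. The real client would
# forward the call to the MCP server and return the result to the user.
import httpx

OLLAMA_HOST = "http://localhost:11434"

# Tool schema advertised to the LLM, mirroring the server's add tool;
# multiply would be described the same way.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "add",
        "description": "Add two integers.",
        "parameters": {
            "type": "object",
            "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
            "required": ["a", "b"],
        },
    },
}]

resp = httpx.post(
    f"{OLLAMA_HOST}/api/chat",
    json={
        "model": "llama3",  # must be a model with tool-calling support
        "messages": [{"role": "user", "content": "What is 5 + 8?"}],
        "tools": TOOLS,
        "stream": False,
    },
    timeout=60.0,
)
for call in resp.json()["message"].get("tool_calls", []):
    fn = call["function"]
    print(f"Model requested {fn['name']} with arguments {fn['arguments']}")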

How to install

Prerequisites:

  • Python 3.8+ installed on your machine
  • Pip available in your PATH
  • Ollama installed and running
  1. Create a virtual environment (optional but recommended):
python -m venv venv
source venv/bin/activate  # Linux/macOS
venv\Scripts\activate     # Windows
  2. Install required Python packages:
pip install "mcp[cli] @ git+https://github.com/awslabs/mcp.git" openai==0.28 httpx
  3. Ensure Ollama is running the model you want (e.g., llama3):
ollama run llama3
  4. Run the MCP server:
python math_server.py
  5. Run the MCP client with the server script:
python ollama_client.py math_server.py

You should now be able to interact with the system by asking math questions, and the MCP client will route the requests to the server-exposed tools via the local Ollama LLM.
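Schematically, a successful exchange looks like this (illustrative, not verbatim program output):

User: What is 5 + 8?
LLM: tool_call -> add(a=5, b=8)
MCP server: 13
Answer: 5 + 8 = 13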

Additional notes

Tips and caveats:

  • Ensure Ollama is configured with a model that supports tool calling; otherwise, the LLM may not generate valid tool_call instructions.
  • The OLLAMA_HOST and MCP_LOG_LEVEL variables set in the install command can be adjusted to point the client at a different Ollama endpoint or to change logging verbosity.
  • If you modify the server to add more tools, update the client’s tool descriptions accordingly so the LLM has accurate tool metadata to reason with (see the sketch after this list).
  • For debugging, inspect logs from both the MCP client and server to verify tool invocations and results.
  • This example runs entirely locally; ensure you have sufficient CPU/RAM for the local LLM hosting.
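For example, a hypothetical subtract tool (not part of the repository) would need both a server-side registration and a matching client-side schema:

# Server side (math_server.py): register the new tool alongside add/multiply.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("math")

@mcp.tool()
def subtract(a: int, b: int) -> int:
    """Subtract b from a."""
    return a - b

# Client side (ollama_client.py): describe the same tool so the LLM can call it.
SUBTRACT_SCHEMA = {
    "type": "function",
    "function": {
        "name": "subtract",
        "description": "Subtract b from a.",
        "parameters": {
            "type": "object",
            "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
            "required": ["a", "b"],
        },
    },
}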
