mcphost
MCP server bridging Ollama, OpenAI, and Anthropic
claude mcp add --transport stdio jinsonwu-mcphost-server go run . \
  --env PORT="Port for HTTP server (default 8115)" \
  --env ENV_FILE="Path to environment file or configuration; uses .example.env as template"
How to use
MCPhost exposes an HTTP API for interacting with a local LLM (via Ollama) through simple HTTP requests. In server mode, you send requests to the configured endpoints to generate text, complete prompts, or chat with your local language model through the MCP-host bridge. The server runs in HTTP server mode by default and listens on port 8115 unless overridden by environment configuration. By bridging local LLM calls to HTTP, it lets developers build chat interfaces or automation around Ollama without requiring direct local model management in every client.
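As a sketch of what a client call might look like: the endpoint path `/api/generate` and the JSON field names below are illustrative assumptions, not documented API; only the default port 8115 comes from the README.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// GenerateRequest is a hypothetical request body; the real field
// names depend on the endpoints mcphost actually exposes.
type GenerateRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
}

// newGenerateRequest builds (but does not send) an HTTP request
// against an mcphost server at base, e.g. "http://localhost:8115".
func newGenerateRequest(base string, gr GenerateRequest) (*http.Request, error) {
	body, err := json.Marshal(gr)
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost, base+"/api/generate", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := newGenerateRequest("http://localhost:8115", GenerateRequest{
		Model:  "llama3",
		Prompt: "Hello",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.String())
}
```

Sending the request with `http.DefaultClient.Do(req)` would then reach the running server; consult the project's endpoint definitions for the actual paths and payloads.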
Typical usage is to configure environment variables, start the server, and then issue HTTP requests to its endpoints for tasks such as generating text, continuing conversations, or running model-specific prompts. Configuration is environment-based and can also be wired through an mcp.json file or command-line flags, as noted in the README. This makes it straightforward to integrate into existing MCP-based tooling or orchestration pipelines that expect an HTTP-accessible MCP service.
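For clients that load servers from an mcp.json file, an entry might look like the following. The exact schema depends on your MCP client; the command and environment variable names mirror the `claude mcp add` invocation above, and the values shown are placeholders:

```json
{
  "mcpServers": {
    "mcphost": {
      "command": "go",
      "args": ["run", "."],
      "env": {
        "PORT": "8115",
        "ENV_FILE": ".env"
      }
    }
  }
}
```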
If you need command-line mode instead, disable server_mode in the code and rebuild to run without the HTTP endpoints, using the internal CLI. The project is designed to bridge local LLM access over HTTP for convenient tooling and automation.
How to install
Prerequisites:
- Go installed (1.20+ recommended)
- Access to a local LLM via Ollama if required by your workflow
Installation steps:
1. Clone the repository:
   git clone <repository-url>
   cd mcphost
2. Copy the example environment file and configure it:
   cp .example.env .env
   Edit .env to customize PORT, the Ollama endpoint, and other settings.
3. Build the server:
   go build
4. Run the server:
   ./mcphost-server
   Optional: run with go run . instead (not recommended for production).
Environment variables you may configure in .env or via your environment:
- PORT: Port on which the HTTP server listens (default 8115)
- Other application-specific settings as defined by the project (e.g., Ollama endpoint, timeouts)
Notes:
- The README indicates server mode is enabled by default. Ensure your environment is configured for HTTP access to Ollama if required.
Additional notes
Tips and notes:
- The server defaults to port 8115 unless overridden by configuration. If you change ports, update any clients accordingly.
- Environment-based configuration is supported via .example.env/.env, mcp.json, or command-line flags. Ensure required variables (like Ollama endpoint) are correctly set to avoid runtime errors.
- If you switch to command-line mode, rebuild the project as described in the README to disable server-mode HTTP endpoints.
- Since the MCP config in this documentation uses a Go-based execution approach, ensure your environment has Go installed and that you run the built binary in an appropriate environment (permissions, PATH).
- When debugging, check for common issues such as port conflicts, missing Ollama services, or misconfigured environment variables.