llm-bridge
A model-agnostic Model Context Protocol (MCP) server that provides a unified interface to multiple Large Language Models (LLMs) such as GPT, DeepSeek, Claude, and more.
```shell
claude mcp add --transport stdio sjquant-llm-bridge-mcp uvx llm-bridge-mcp \
  --env GOOGLE_API_KEY="your_google_api_key" \
  --env OPENAI_API_KEY="your_openai_api_key" \
  --env DEEPSEEK_API_KEY="your_deepseek_api_key" \
  --env ANTHROPIC_API_KEY="your_anthropic_api_key"
```
How to use
LLM Bridge MCP provides a unified interface to access multiple large language model providers through the MCP protocol. It exposes a single entry point to route prompts to different backends such as OpenAI, Anthropic, Google Gemini, and DeepSeek, while allowing you to configure model names, temperature, and token limits per request. The server is built with type safety in mind using Pydantic AI, ensuring robust validation of inputs and responses. You can leverage the run_llm tool to send a prompt to a specific model, optionally supply a system prompt to guide the model’s behavior, and receive a structured response with the model’s output. This makes it easy to experiment with different providers and models within the same workflow.
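As a rough sketch of what a `run_llm` call carries, the snippet below assembles the kind of argument dictionary an MCP client might send. The parameter names (`prompt`, `model_name`, `system_prompt`, `temperature`, `max_tokens`) are assumptions based on the description above, not the authoritative tool schema:

```python
# Hypothetical run_llm tool arguments; names are illustrative and may
# differ from the actual llm-bridge-mcp schema.
def build_run_llm_args(prompt, model_name="openai:gpt-4o-mini",
                       system_prompt=None, temperature=0.7, max_tokens=1024):
    """Assemble the argument dict an MCP client would pass to run_llm."""
    args = {
        "prompt": prompt,
        "model_name": model_name,   # provider-prefixed model id
        "temperature": temperature,
        "max_tokens": max_tokens,
    }
    if system_prompt is not None:
        args["system_prompt"] = system_prompt
    return args

args = build_run_llm_args("Summarize this repo", system_prompt="Be concise")
print(args["model_name"])  # → openai:gpt-4o-mini
```

Because the model is selected per request via `model_name`, switching providers is a one-field change rather than a code change.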
How to install
Prerequisites:
- Node.js and npm (for Smithery-based installation, optional if using uvx-based workflow)
- uv (for Python-based execution) installed on the system
- Access to your API keys for the supported LLM providers
Installation steps:
1. Installation via Smithery (automatic):

   ```shell
   npx -y @smithery/cli install @sjquant/llm-bridge-mcp --client claude
   ```

2. Manual installation (uvx-based runtime):

   a. Install uv if not already installed:
      - macOS: brew install uv
      - Linux: curl -LsSf https://astral.sh/uv/install.sh | sh
      - Windows: powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

   b. Clone the repository:

      ```shell
      git clone https://github.com/yourusername/llm-bridge-mcp.git
      cd llm-bridge-mcp
      ```

   c. Set the required environment variables in a .env file or your environment, e.g. OPENAI_API_KEY, ANTHROPIC_API_KEY, GOOGLE_API_KEY, DEEPSEEK_API_KEY.
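For example, a .env file in the repository root might look like the following (placeholder values shown; substitute your real keys and omit any providers you don't use):

```shell
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GOOGLE_API_KEY=your_google_api_key
DEEPSEEK_API_KEY=your_deepseek_api_key
```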
Notes:
- If using Smithery, the server will be installed and managed via the Smithery CLI.
- If running manually, ensure the llm-bridge-mcp package is available in your PATH as llm-bridge-mcp and that uvx points to the appropriate executable for your environment.
Additional notes
Tips and common issues:
- Ensure your API keys are set as environment variables (OPENAI_API_KEY, ANTHROPIC_API_KEY, GOOGLE_API_KEY, DEEPSEEK_API_KEY) before starting the MCP server.
- If you encounter spawn ENOENT errors, verify that the uvx executable is in your PATH or use the full path to uvx in the MCP configuration.
- The default model configuration uses model_name values like "openai:gpt-4o-mini"; you can customize per request via the run_llm interface if supported by your MCP setup.
- You can document or extend the supported providers as long as their APIs are accessible via the uvx-based bridge and the MCP interface.
- When using multiple providers, consider setting per-model temperature and max_tokens to optimize cost and latency for your use case.
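If you hit spawn ENOENT errors, one way to point the MCP client at an absolute uvx path is a server entry like the sketch below. The server key and the /usr/local/bin/uvx path are illustrative assumptions; adjust both to your environment and client's configuration format:

```json
{
  "mcpServers": {
    "llm-bridge": {
      "command": "/usr/local/bin/uvx",
      "args": ["llm-bridge-mcp"],
      "env": {
        "OPENAI_API_KEY": "your_openai_api_key",
        "ANTHROPIC_API_KEY": "your_anthropic_api_key",
        "GOOGLE_API_KEY": "your_google_api_key",
        "DEEPSEEK_API_KEY": "your_deepseek_api_key"
      }
    }
  }
}
```

Using an absolute `command` path sidesteps PATH differences between your shell and the environment the MCP client spawns processes from.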
Related MCP Servers
MCP-Bridge
A middleware that provides an OpenAI-compatible endpoint capable of calling MCP tools
lc2mcp
Convert LangChain tools to FastMCP tools
boilerplate
TypeScript Model Context Protocol (MCP) server boilerplate providing IP lookup tools/resources. Includes CLI support and extensible structure for connecting AI systems (LLMs) to external data sources like ip-api.com. Ideal template for creating new MCP integrations via Node.js.
asterisk
Asterisk Model Context Protocol (MCP) server.
rednote-analyzer
MCP server that lets AI assistants search, analyze, and generate Xiaohongshu (小红书) content with real-time data via browser automation
elenchus
Elenchus MCP Server - Adversarial verification system for code review