multi-llm-cross-check
A Model Context Protocol (MCP) server that cross-checks responses from multiple LLM providers simultaneously
claude mcp add --transport stdio lior-ps-multi-llm-cross-check-mcp-server \
  --env OPENAI_API_KEY="your_openai_key" \
  --env ANTHROPIC_API_KEY="your_anthropic_key" \
  --env PERPLEXITY_API_KEY="your_perplexity_key" \
  --env GEMINI_API_KEY="your_gemini_key" \
  -- uv --directory /multi-llm-cross-check-mcp-server run main.py
How to use
This MCP server provides a cross-check capability by querying multiple large language model providers in parallel. Once configured, Claude Desktop can send a prompt to this server and receive responses from each configured provider: OpenAI (ChatGPT), Anthropic (Claude), Perplexity AI, and Google Gemini. The server issues the requests in parallel, aggregates the responses, and returns a structured dictionary mapping each provider to its output.
To use it, enable the MCP server in Claude Desktop via the provided configuration, then invoke the cross_check tool in a conversation and supply a prompt. The server returns an individual result for each enabled provider; any provider whose API key is not configured is skipped.
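The fan-out described above can be sketched roughly as follows. This is a minimal illustration, not the repository's actual code: the provider calls are stubbed, and in the real server each would dispatch to the vendor's SDK.

```python
import asyncio

# Stubbed provider calls; the real server would call each vendor's SDK here.
async def ask_chatgpt(prompt: str) -> str:
    return f"ChatGPT answer to: {prompt}"

async def ask_claude(prompt: str) -> str:
    return f"Claude answer to: {prompt}"

async def cross_check(prompt: str) -> dict[str, str]:
    """Query every provider concurrently and map provider name -> response."""
    providers = {"ChatGPT": ask_chatgpt, "Claude": ask_claude}
    answers = await asyncio.gather(*(ask(prompt) for ask in providers.values()))
    return dict(zip(providers, answers))

print(asyncio.run(cross_check("What is MCP?")))
```

Because asyncio.gather preserves argument order, zipping the provider names back onto the results yields the provider-to-response dictionary the tool returns.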
How to install
Prerequisites:
- Python 3.8 or higher
- uv package manager (pip install uv)
- API keys for the LLM providers you plan to use
Installation via Smithery (automatic):
- Install the MCP server with Smithery:
npx -y @smithery/cli install @lior-ps/multi-llm-cross-check-mcp-server --client claude
Manual installation:
- Clone the repository:
git clone https://github.com/lior-ps/multi-llm-cross-check-mcp-server.git
cd multi-llm-cross-check-mcp-server
- Initialize a uv environment and install requirements:
uv venv
uv pip install -r requirements.txt
- Configure Claude Desktop to connect to the MCP server (example shown in the repo README):
- Create (or edit) claude_desktop_config.json in your Claude Desktop configuration directory and add an mcpServers entry:
{
  "mcpServers": {
    "multi-llm-cross-check": {
      "command": "uv",
      "args": [
        "--directory",
        "/multi-llm-cross-check-mcp-server",
        "run",
        "main.py"
      ],
      "env": {
        "OPENAI_API_KEY": "your_openai_key",
        "ANTHROPIC_API_KEY": "your_anthropic_key",
        "PERPLEXITY_API_KEY": "your_perplexity_key",
        "GEMINI_API_KEY": "your_gemini_key"
      }
    }
  }
}
Notes:
- Enable only the providers for which you have API keys; missing keys will cause that provider to be skipped.
- If uv is not on Claude Desktop's PATH, specify the full path to the uv executable in the command field.
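To find the full path to use in the command field, you can query your shell (the fallback message here is just illustrative):

```shell
# Print the absolute path of the uv executable, if installed;
# paste that value into the "command" field of the config.
command -v uv || echo "uv not on PATH - install it or use the full path"
```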
Additional notes
Tips:
- Ensure all API keys are kept secure and not exposed in public config files.
- If a provider is unavailable or returns an error, the MCP server will still return results from the other providers.
- The server processes requests asynchronously and in parallel; response times vary by provider, so the slowest enabled provider determines overall latency.
- You can control which providers are queried by editing the env entries in the Claude Desktop configuration (or the environment variables passed on the command line).
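The skip-and-tolerate behavior described in these tips can be sketched as follows. The helper and provider names are hypothetical; only the pattern (skip providers without keys, isolate per-provider failures) reflects the documented behavior.

```python
import asyncio
import os

PROVIDER_KEYS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "perplexity": "PERPLEXITY_API_KEY",
    "gemini": "GEMINI_API_KEY",
}

async def call_provider(name: str, prompt: str) -> str:
    # Stub; a real implementation would dispatch to the vendor SDK.
    # "perplexity" is made to fail here to demonstrate error isolation.
    if name == "perplexity":
        raise RuntimeError("provider unavailable")
    return f"{name} response"

async def cross_check(prompt: str) -> dict[str, str]:
    # Skip providers whose API key environment variable is not set.
    enabled = [n for n, key in PROVIDER_KEYS.items() if os.environ.get(key)]
    results = await asyncio.gather(
        *(call_provider(n, prompt) for n in enabled),
        return_exceptions=True,  # one failing provider doesn't sink the rest
    )
    return {
        n: (f"error: {r}" if isinstance(r, Exception) else r)
        for n, r in zip(enabled, results)
    }
```

With return_exceptions=True, a provider that raises is reported as an error entry while the other providers' answers are still returned, matching the behavior noted above.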