
multi-llm-cross-check

A Model Context Protocol (MCP) server that cross-checks responses from multiple LLM providers simultaneously

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio lior-ps-multi-llm-cross-check-mcp-server \
  --env GEMINI_API_KEY="your_gemini_key" \
  --env OPENAI_API_KEY="your_openai_key" \
  --env ANTHROPIC_API_KEY="your_anthropic_key" \
  --env PERPLEXITY_API_KEY="your_perplexity_key" \
  -- uv --directory /multi-llm-cross-check-mcp-server run main.py

How to use

This MCP server provides a cross-check capability by querying multiple large language model providers in parallel. When configured, Claude Desktop can communicate with this MCP server to send a prompt and receive responses from the configured providers such as OpenAI (ChatGPT), Anthropic (Claude), Perplexity AI, and Google Gemini. The server handles parallel requests, aggregates responses, and returns a structured dictionary mapping each provider to its respective output. To use it, enable the MCP server in Claude Desktop via the provided configuration, then invoke the cross_check tool in your conversation and supply a prompt. The server will return an object with individual results for each enabled provider, or skip a provider if its API key is not configured.
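The query-in-parallel, aggregate-into-a-dictionary flow described above can be sketched as follows. This is a minimal illustration, not the server's actual code: the stub `ask_openai`/`ask_anthropic` coroutines stand in for real provider API calls.

```python
import asyncio

# Hypothetical stand-ins for real provider calls; the actual server issues
# HTTP requests to each provider's API using the configured key.
async def ask_openai(prompt):
    return f"ChatGPT: {prompt}"

async def ask_anthropic(prompt):
    return f"Claude: {prompt}"

async def cross_check(prompt):
    providers = {"ChatGPT": ask_openai, "Claude": ask_anthropic}
    # Fire all provider requests in parallel and wait for every reply.
    replies = await asyncio.gather(*(fn(prompt) for fn in providers.values()))
    # Map each provider name to its respective output, as the tool's result does.
    return dict(zip(providers, replies))

print(asyncio.run(cross_check("What is MCP?")))
```

`asyncio.gather` is what makes the requests concurrent: total latency is roughly that of the slowest provider rather than the sum of all of them.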

How to install

Prerequisites:

  • Python 3.8 or higher
  • uv package manager (pip install uv)
  • API keys for the LLM providers you plan to use

Installation via Smithery (automatic):

  1. Install the MCP server with Smithery:
npx -y @smithery/cli install @lior-ps/multi-llm-cross-check-mcp-server --client claude

Manual installation:

  1. Clone the repository:
git clone https://github.com/lior-ps/multi-llm-cross-check-mcp-server.git
cd multi-llm-cross-check-mcp-server
  2. Initialize a uv environment and install requirements:
uv venv
uv pip install -r requirements.txt
  3. Configure Claude Desktop to connect to the MCP server (example shown in the repo README):
  • Create claude_desktop_config.json in the Claude Desktop configuration directory with the appropriate server entry:
{
  "mcpServers": {
    "multi-llm-cross-check": {
      "command": "uv",
      "args": [
        "--directory",
        "/multi-llm-cross-check-mcp-server",
        "run",
        "main.py"
      ],
      "env": {
        "OPENAI_API_KEY": "your_openai_key",
        "ANTHROPIC_API_KEY": "your_anthropic_key",
        "PERPLEXITY_API_KEY": "your_perplexity_key",
        "GEMINI_API_KEY": "your_gemini_key"
      }
    }
  }
}

Notes:

  1. Enable only the providers for which you have API keys; a provider with no configured key is simply skipped.
  2. You may need to specify the full path to the uv executable in the command field.
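The key-based skipping in note 1 might look like this in outline. The `PROVIDER_KEYS` map and `enabled_providers` helper are illustrative names, not the server's actual code:

```python
import os

# Hypothetical mapping from provider name to its API-key variable; the
# server only queries providers whose key is present in the environment.
PROVIDER_KEYS = {
    "ChatGPT": "OPENAI_API_KEY",
    "Claude": "ANTHROPIC_API_KEY",
    "Perplexity": "PERPLEXITY_API_KEY",
    "Gemini": "GEMINI_API_KEY",
}

def enabled_providers():
    # A provider with no configured key is skipped rather than queried.
    return [name for name, var in PROVIDER_KEYS.items() if os.environ.get(var)]

os.environ["OPENAI_API_KEY"] = "your_openai_key"  # simulate one configured key
print(enabled_providers())
```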

Additional notes

Tips:

  • Ensure all API keys are kept secure and not exposed in public config files.
  • If a provider is unavailable or returns an error, the MCP server will still return results from the other providers.
  • The system is designed for asynchronous parallel processing; prompts should be crafted with the understanding that response times may vary by provider.
  • You can choose which providers are included by editing the environment variables in the Claude Desktop configuration entry.
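The fault-tolerance behavior described above (a failed provider does not sink the others) can be sketched with asyncio's `return_exceptions` flag. Provider names and the error formatting here are hypothetical:

```python
import asyncio

# Hypothetical providers: one fails, one succeeds, to show that an
# unavailable provider still leaves the other results intact.
async def flaky_provider(prompt):
    raise RuntimeError("provider unavailable")

async def good_provider(prompt):
    return "ok: " + prompt

async def cross_check(prompt):
    providers = {"flaky": flaky_provider, "good": good_provider}
    replies = await asyncio.gather(
        *(fn(prompt) for fn in providers.values()),
        return_exceptions=True,  # collect errors instead of raising
    )
    return {
        name: reply if isinstance(reply, str) else f"error: {reply}"
        for name, reply in zip(providers, replies)
    }

print(asyncio.run(cross_check("hello")))
```

With `return_exceptions=True`, a raised exception is returned in place of that provider's result, so the aggregation step can report it alongside the successful responses.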
