
llm-council

The LLM Council works together to answer your hardest questions

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio amiable-dev-llm-council \
  --env OPENROUTER_API_KEY="Your OpenRouter API key (required for the default OpenRouter gateway)" \
  --env GOOGLE_API_KEY="Your Google API key (optional if using OpenRouter)" \
  --env OPENAI_API_KEY="Your OpenAI API key (optional if using OpenRouter)" \
  --env ANTHROPIC_API_KEY="Your Anthropic API key (optional if using OpenRouter)" \
  --env REQUESTY_API_KEY="Your Requesty API key (optional, for the Requesty gateway)" \
  --env LLM_COUNCIL_CHAIRMAN="Model used for final synthesis (optional)" \
  --env LLM_COUNCIL_DEFAULT_GATEWAY="default|direct|requesty|openrouter" \
  -- python -m llm_council.mcp_server

How to use

LLM Council Core is a multi-LLM deliberation system. The MCP server exposes an API-backed tool that orchestrates several large language models in three stages:

  1. It dispatches the user's question to multiple LLMs in parallel.
  2. Each LLM reviews and ranks the other models' responses in an anonymized fashion.
  3. A Chairman LLM synthesizes all responses into a single, high-quality answer.

By default the server routes requests through a gateway such as OpenRouter, but you can configure it to use direct provider keys or Requesty, depending on your needs. To operate it, run the Python module that hosts the MCP entry point and supply API keys and gateway preferences via the environment variables documented in the README. The server then handles routing questions, collecting model outputs, triaging them, and producing a synthesized result for clients.
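The three-stage flow above can be sketched in Python. This is a minimal illustration, not the library's actual API: the model names and the `ask` helper are hypothetical stand-ins for real gateway calls.

```python
import asyncio
import random

COUNCIL = ["model-a", "model-b", "model-c"]  # placeholder council model IDs
CHAIRMAN = "model-chair"                     # placeholder Chairman model ID

async def ask(model: str, prompt: str) -> str:
    """Stand-in for a real gateway call (e.g. via OpenRouter)."""
    await asyncio.sleep(0)  # where real network latency would occur
    return f"[{model}] answer to: {prompt}"

async def council_answer(question: str) -> str:
    # Stage 1: dispatch the question to all council members in parallel.
    drafts = await asyncio.gather(*(ask(m, question) for m in COUNCIL))

    # Stage 2: each member ranks the full set of drafts, anonymized
    # (shuffled) so no model knows which peer wrote which draft.
    anonymized = list(drafts)
    random.shuffle(anonymized)
    rankings = await asyncio.gather(
        *(ask(m, "Rank these responses:\n" + "\n".join(anonymized)) for m in COUNCIL)
    )

    # Stage 3: the Chairman synthesizes drafts and rankings into one answer.
    synthesis_prompt = (
        f"Question: {question}\nDrafts:\n" + "\n".join(drafts)
        + "\nRankings:\n" + "\n".join(rankings)
    )
    return await ask(CHAIRMAN, synthesis_prompt)

print(asyncio.run(council_answer("What is 2 + 2?")))
```

The real server performs these stages with live model calls through the configured gateway; the structure (parallel dispatch, anonymized peer ranking, chairman synthesis) is what matters here.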

How to install

Prerequisites:

  • Python 3.11+ installed on your system
  • Internet access to fetch Python packages

Install the core library with MCP support:

pip install "llm-council-core[mcp]"

If you only need the core library without MCP server features:

pip install llm-council-core

Optionally set up a virtual environment (recommended):

python -m venv venv
source venv/bin/activate  # on Unix/macOS
venv\Scripts\activate.bat # on Windows
pip install "llm-council-core[mcp]"

Run the MCP server (example assumes the module llm_council.mcp_server provides the entry point):

# Ensure environment variables are set as needed
export OPENROUTER_API_KEY="sk-..."
export LLM_COUNCIL_DEFAULT_GATEWAY="openrouter"

python -m llm_council.mcp_server

If you are deploying via a container or package manager, adjust the command to your chosen method according to the MCP server containerization guide.

Additional notes

Tips and common considerations:

  • Always keep your API keys secure; avoid committing keys to version control. Use environment variables or a secure key management approach.
  • The default gateway is OpenRouter, but you can switch to direct provider access or Requesty by configuring LLM_COUNCIL_DEFAULT_GATEWAY and related provider keys.
  • The Chairman model can be customized via LLM_COUNCIL_CHAIRMAN to influence synthesis quality.
  • If your gateway has usage limits or cold-start delays (e.g., certain free tiers), consider a paid tier or a gateway with predictable availability for latency-sensitive workloads.
  • For production, consider YAML-based or environment-based configuration for models, triage, and gateways to mirror the advanced configuration options described in the project documentation.
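As a rough illustration of environment-based configuration, a startup check might validate that the keys required by the selected gateway are present. This is a hedged sketch, assuming a hypothetical key-per-gateway mapping; `llm-council-core`'s actual configuration loader may differ.

```python
import os

# Hypothetical mapping: the minimum keys each gateway needs.
REQUIRED_BY_GATEWAY = {
    "openrouter": ["OPENROUTER_API_KEY"],
    "requesty": ["REQUESTY_API_KEY"],
    "direct": ["OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GOOGLE_API_KEY"],
}

def check_config() -> dict:
    """Read gateway settings from the environment and fail fast on missing keys."""
    gateway = os.environ.get("LLM_COUNCIL_DEFAULT_GATEWAY", "openrouter")
    missing = [k for k in REQUIRED_BY_GATEWAY.get(gateway, [])
               if not os.environ.get(k)]
    if missing:
        raise RuntimeError(f"Gateway '{gateway}' is missing keys: {missing}")
    return {
        "gateway": gateway,
        "chairman": os.environ.get("LLM_COUNCIL_CHAIRMAN"),  # optional override
    }
```

Failing fast at startup keeps key problems from surfacing mid-deliberation, when a missing provider key would otherwise appear as a confusing model-call error.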
