
claude-concilium

Multi-agent AI consultation framework for Claude Code via MCP — get second opinions from OpenAI, Gemini, Qwen, DeepSeek

Installation
Run this command in your terminal to add the Qwen MCP server (mcp-qwen) to Claude Code; the OpenAI and Gemini servers are registered the same way with their own paths.
claude mcp add --transport stdio spyrae-claude-concilium node /absolute/path/to/servers/mcp-qwen/server.js \
  --env QWEN_AUTH_TYPE="<set-your-auth-type>"

How to use

Claude Concilium is an MCP-based framework that runs parallel consultations with multiple LLM providers through individual MCP servers. It lets you solicit opinions from OpenAI (via the mcp-openai server), Gemini (via the mcp-gemini server), and Qwen (via the mcp-qwen server) in parallel, then synthesize a consensus or iterate to resolve disagreements. Each server wraps a CLI tool (codex for OpenAI, the gemini CLI for Gemini, and the qwen CLI for Qwen) and can operate standalone or as part of a larger Concilium configuration. To use it, point your .mcp.json (or your preferred MCP config) at the appropriate server.js entry points, then run the MCP orchestrator to query all providers simultaneously and receive a consolidated result.
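The fan-out/fan-in pattern described above can be sketched as follows. This is an illustrative Node.js sketch, not the framework's actual implementation: the provider functions are stubs standing in for the real MCP tool calls (openai_chat, gemini_chat, qwen_chat), and `consult` is a hypothetical name.

```javascript
// Stub providers standing in for the real MCP servers. In Concilium,
// each of these would be a tool call routed to a separate server.js.
const providers = {
  "mcp-openai": async (prompt) => `openai opinion on: ${prompt}`,
  "mcp-gemini": async (prompt) => `gemini opinion on: ${prompt}`,
  "mcp-qwen":   async (prompt) => `qwen opinion on: ${prompt}`,
};

// Fan out one prompt to every enabled provider in parallel, then fan in.
// allSettled captures per-provider failures (quota, auth) instead of
// letting one rejection sink the whole consultation round.
async function consult(prompt, enabled = Object.keys(providers)) {
  const settled = await Promise.allSettled(
    enabled.map((name) =>
      providers[name](prompt).then((answer) => ({ name, answer }))
    )
  );
  return {
    opinions: settled
      .filter((r) => r.status === "fulfilled")
      .map((r) => r.value),
    failures: settled
      .filter((r) => r.status === "rejected")
      .map((r) => String(r.reason)),
  };
}
```

The successful `opinions` feed consensus synthesis; the `failures` list is what a fallback chain (e.g., Qwen or DeepSeek standing in for OpenAI) would react to.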

You can customize which providers participate by editing the mcp.json to enable or disable specific servers. Each server exposes its own tools: mcp-openai provides openai_chat and openai_review, mcp-gemini exposes gemini_chat and gemini_analyze, and mcp-qwen offers qwen_chat. The system handles quota limits, authentication, and fallbacks (e.g., when OpenAI's quota is exceeded, requests can fall back to Qwen or DeepSeek according to your configured chain). When running in Docker, you can also mount authentication credentials as needed and pass the SERVER environment variable to select the target server.

How to install

Prerequisites:

  • Node.js 20+ installed on your system
  • Git installed
  • Access to the required CLI tools (OpenAI Codex, Gemini CLI, Qwen CLI) and any authentication prerequisites

Step-by-step installation:

  1. Clone the repository

  2. Install dependencies for each server

    • cd servers/mcp-openai && npm install && cd ../..
    • cd servers/mcp-gemini && npm install && cd ../..
    • cd servers/mcp-qwen && npm install && cd ../..
  3. (Optional) Run the smoke tests locally

    • node test/smoke-test.mjs
  4. Configure MCP

    • Create or edit your .mcp.json (or use config/mcp.json.example as a starting point) to reference:
      • mcp-openai: path to /servers/mcp-openai/server.js
      • mcp-gemini: path to /servers/mcp-gemini/server.js
      • mcp-qwen: path to /servers/mcp-qwen/server.js
  5. Run the MCP setup

    • Ensure your Node.js environment is set up, then run your MCP orchestrator (as per your setup) to start querying all configured servers.
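Step 4 above can be made concrete with a .mcp.json along these lines, assuming the standard Claude Code mcpServers layout (adapt the absolute paths to your clone; the QWEN_AUTH_TYPE placeholder is taken from the install command and must be filled in by you):

```json
{
  "mcpServers": {
    "mcp-openai": {
      "command": "node",
      "args": ["/absolute/path/to/servers/mcp-openai/server.js"]
    },
    "mcp-gemini": {
      "command": "node",
      "args": ["/absolute/path/to/servers/mcp-gemini/server.js"]
    },
    "mcp-qwen": {
      "command": "node",
      "args": ["/absolute/path/to/servers/mcp-qwen/server.js"],
      "env": { "QWEN_AUTH_TYPE": "<set-your-auth-type>" }
    }
  }
}
```

Removing an entry from mcpServers is how you disable a provider for a given project.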

Additional notes

Tips and best practices:

  • Each server wraps a local CLI tool; ensure those tools are installed and authenticated (e.g., Codex OAuth for OpenAI, Gemini OAuth, and Qwen credentials).
  • For OpenAI, CODEX_HOME may need to point to your local credentials directory; mount or set this path accordingly when running via Docker.
  • If you encounter QUOTA_EXCEEDED or AUTH_EXPIRED errors, rely on the configured fallback chain (e.g., Qwen or DeepSeek) and consider refreshing tokens.
  • When running in Docker, you can pass SERVER=<server-name> to select which server to run and mount authentication credentials as shown in the README.
  • The CLI adapter behind each server can be extended or swapped by editing the corresponding server implementation without changing the overall MCP orchestration.
