claude-concilium
Multi-agent AI consultation framework for Claude Code via MCP — get second opinions from OpenAI, Gemini, Qwen, DeepSeek
claude mcp add spyrae-claude-concilium --transport stdio --env QWEN_AUTH_TYPE="<set-your-auth-type>" \
  -- node /absolute/path/to/servers/mcp-qwen/server.js
How to use
Claude Concilium is an MCP-based framework that runs parallel consultations with multiple LLM providers through individual MCP servers. This setup enables you to solicit opinions from OpenAI (via the mcp-openai server), Gemini (via the mcp-gemini server), and Qwen (via the mcp-qwen server) in parallel, then synthesize a consensus or iterate to resolve disagreements. Each server wraps a CLI tool (codex for OpenAI, gemini CLI for Gemini, and qwen CLI for Qwen) and can operate standalone or as part of a larger Concilium configuration. To use, configure your .mcp.json (or your preferred MCP config) to point to the appropriate server.js entry points, then run the MCP orchestrator to query all providers simultaneously and receive a consolidated result.
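As an illustration, a minimal .mcp.json could look like the following (the paths and the QWEN_AUTH_TYPE placeholder mirror the command above; treat this as a sketch, since the exact schema depends on your MCP client):

```json
{
  "mcpServers": {
    "mcp-openai": {
      "command": "node",
      "args": ["/absolute/path/to/servers/mcp-openai/server.js"]
    },
    "mcp-gemini": {
      "command": "node",
      "args": ["/absolute/path/to/servers/mcp-gemini/server.js"]
    },
    "mcp-qwen": {
      "command": "node",
      "args": ["/absolute/path/to/servers/mcp-qwen/server.js"],
      "env": { "QWEN_AUTH_TYPE": "<set-your-auth-type>" }
    }
  }
}
```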
You can customize which providers participate by editing mcp.json to enable or disable specific servers. Each server exposes its own tools:
- mcp-openai: openai_chat and openai_review
- mcp-gemini: gemini_chat and gemini_analyze
- mcp-qwen: qwen_chat
The system handles quota limits, authentication, and fallbacks (e.g., if OpenAI's quota is exceeded, requests can fall back to Qwen or DeepSeek according to your configured chain). When running in Docker, you can also mount authentication credentials as needed and pass the SERVER environment variable to select the target server.
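The fallback behavior can be sketched roughly as follows. This is a hypothetical helper, not the framework's actual code; the error codes QUOTA_EXCEEDED and AUTH_EXPIRED are the ones this README mentions, and the `ask` method on each provider is assumed for illustration:

```javascript
// Sketch of a provider fallback chain (hypothetical, not the actual implementation).
// Each provider's `ask` may throw an error whose `code` is QUOTA_EXCEEDED or
// AUTH_EXPIRED; on those codes we move to the next provider in the chain.
const RETRYABLE = new Set(["QUOTA_EXCEEDED", "AUTH_EXPIRED"]);

async function askWithFallback(chain, prompt) {
  let lastError;
  for (const provider of chain) {
    try {
      return { provider: provider.name, answer: await provider.ask(prompt) };
    } catch (err) {
      if (!RETRYABLE.has(err.code)) throw err; // non-quota/auth errors are fatal
      lastError = err; // remember the failure and try the next provider
    }
  }
  throw lastError; // every provider in the chain was exhausted
}
```

With a chain ordered, say, OpenAI → Qwen → DeepSeek, a quota error on the first provider transparently hands the prompt to the next one.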
How to install
Prerequisites:
- Node.js 20+ installed on your system
- Git installed
- Access to the required CLI tools (OpenAI Codex, Gemini CLI, Qwen CLI) and any authentication prerequisites
Step-by-step installation:
1. Clone the repository
   - git clone https://github.com/spyrae/claude-concilium.git
   - cd claude-concilium
2. Install dependencies for each server
   - cd servers/mcp-openai && npm install && cd ../..
   - cd servers/mcp-gemini && npm install && cd ../..
   - cd servers/mcp-qwen && npm install && cd ../..
3. (Optional) Run the smoke tests locally
   - node test/smoke-test.mjs
4. Configure MCP
   - Create or edit your .mcp.json (or use config/mcp.json.example as a starting point) to reference:
     - mcp-openai: path to /servers/mcp-openai/server.js
     - mcp-gemini: path to /servers/mcp-gemini/server.js
     - mcp-qwen: path to /servers/mcp-qwen/server.js
5. Run the MCP setup
   - Ensure your Node.js environment is set up, then run your MCP orchestrator (as per your setup) to start querying all configured servers.
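Conceptually, the orchestrator's parallel fan-out and consolidation can be sketched like this. This is a simplified illustration using Promise.allSettled; the real orchestrator speaks MCP over stdio rather than calling plain functions, and the `query` method is assumed for illustration:

```javascript
// Simplified sketch of querying several MCP servers in parallel and
// consolidating the results (illustrative only, not the project's code).
async function consult(servers, question) {
  const settled = await Promise.allSettled(
    servers.map((s) => s.query(question).then((opinion) => ({ name: s.name, opinion })))
  );
  return {
    // Opinions from servers that answered successfully, in input order.
    opinions: settled.filter((r) => r.status === "fulfilled").map((r) => r.value),
    // Failures (quota, auth, network) are collected instead of aborting the round.
    failures: settled.filter((r) => r.status === "rejected").map((r) => String(r.reason)),
  };
}
```

Promise.allSettled (rather than Promise.all) matters here: one provider going down should degrade the consultation, not cancel it.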
Additional notes
Tips and best practices:
- Each server wraps a local CLI tool; ensure those tools are installed and authenticated (e.g., Codex OAuth for OpenAI, Gemini OAuth, and Qwen credentials).
- For OpenAI, CODEX_HOME may need to point to your local credentials directory; mount or set this path accordingly when running via Docker.
- If you encounter QUOTA_EXCEEDED or AUTH_EXPIRED errors, rely on the configured fallback chain (e.g., Qwen or DeepSeek) and consider refreshing tokens.
- When running in Docker, you can pass SERVER=<server-name> to select which server to run and mount authentication credentials as shown in the README.
- The CLI wrapper behind each server can be extended or swapped by editing the corresponding server implementation, without changing the overall MCP orchestration.