ai-council
Multi-AI consensus MCP server that queries multiple AI models (OpenAI, Claude, Gemini, custom APIs) in parallel and synthesizes responses to reduce bias and improve accuracy. A Python implementation of the wisdom-of-crowds approach for AI decision making.
```shell
claude mcp add --transport stdio 0xakuti-ai-council-mcp uvx ai-council \
  --env OPENROUTER_API_KEY="..."
```
How to use
AI Council MCP Server queries multiple AI models to provide a robust, consensus-driven answer. It runs several models in parallel, assigns anonymous code names to their responses to prevent synthesis bias, and uses a synthesizer model to produce a single, comprehensive response drawn from all model inputs. By default, it uses OpenRouter with Claude Sonnet, Gemini, and DeepSeek, but it can also integrate other OpenAI-compatible endpoints or custom APIs. To use it, configure your MCP client (Cursor IDE, Claude Desktop, or any other MCP client) to point at the ai-council MCP server and supply the API keys for the models you want engaged. The server is designed to degrade gracefully if one or more models fail, so you still receive a best-effort answer synthesized from the available responses.
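The parallel-query, anonymize, then synthesize flow can be sketched as follows. This is a minimal illustration, not the server's actual code: `query_model` is a hypothetical stand-in for a network call to an OpenRouter-hosted model, and the "synthesis" step simply joins the anonymized answers where the real server would pass them to a synthesizer model.

```python
import asyncio

# Hypothetical stand-in for a real model call; the actual server
# queries OpenRouter-hosted models (Claude Sonnet, Gemini, DeepSeek).
async def query_model(name: str, prompt: str) -> str:
    await asyncio.sleep(0)  # placeholder for a network round-trip
    return f"{name}'s answer to: {prompt}"

async def council(prompt: str, models: list[str]) -> str:
    # Query every model in parallel.
    answers = await asyncio.gather(*(query_model(m, prompt) for m in models))
    # Anonymize responses so the synthesizer cannot favor a provider by name.
    anonymized = {f"Model-{chr(65 + i)}": a for i, a in enumerate(answers)}
    # A real synthesizer model would merge these; here we just join them.
    return "\n".join(f"{code}: {ans}" for code, ans in anonymized.items())

result = asyncio.run(council("What is 2+2?", ["claude", "gemini", "deepseek"]))
print(result)
```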
How to install
Prerequisites:
- Python 3.10+ installed on your system.
- uv (the fast Python package and project manager), which provides the `uvx` runner.

Installation steps:
1) Install uv if not already installed (example for Unix-like systems):
   pip install uv
2) Run the AI Council package (via uvx or pipx as recommended in the README):
   - Using uvx (preferred for uv-based deployment):
     uvx ai-council
   - Using pipx (recommended for isolated environments):
     pipx run ai-council
3) Alternatively, install via pip (manual install):
   pip install ai-council
   Then run with the command configured in your MCP client:
   ai-council
4) Validate the installation by starting the MCP server and verifying it accepts connections from your MCP client.

Notes:
- If you use pipx, you can update your MCP configuration to call: {"command": "pipx", "args": ["run", "ai-council"]}.
- For a local uv deployment, follow the config.yaml or environment-variable setup to provide API keys for the models you want to query.
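For MCP clients that read a JSON configuration file (Claude Desktop's `claude_desktop_config.json` is one common example), the uvx-based setup above might look like this sketch; the server name and key value are placeholders:

```json
{
  "mcpServers": {
    "ai-council": {
      "command": "uvx",
      "args": ["ai-council"],
      "env": { "OPENROUTER_API_KEY": "..." }
    }
  }
}
```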
Additional notes
Tips and common issues:
- Environment variables: ensure OPENROUTER_API_KEY (and any other model keys you rely on) is set in your MCP config.
- Parallelism: the default max_models is 3; adjust it with CLI arguments such as --max-models to tailor latency and cost.
- Synthesis model: the synthesizer selects among anonymous responses; you can influence its behavior with synthesis_model_selection in a config.yaml if you adopt the advanced configuration.
- Model availability: if a model is down or unreachable, AI Council will degrade gracefully and still produce a synthesized answer from the remaining models.
- OpenRouter compatibility: the server is designed to work with OpenRouter and other OpenAI-compatible APIs; ensure your API keys and endpoints are correctly configured.
- Logging: increase the log level via --log-level (DEBUG/INFO/WARNING/ERROR) to diagnose issues during setup or runtime.
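The graceful-degradation behavior described above can be illustrated with `asyncio.gather(..., return_exceptions=True)`, which collects failures alongside successes instead of cancelling the whole batch. This is a simplified sketch with hypothetical model calls, not the server's implementation:

```python
import asyncio

# Hypothetical model calls; one fails to demonstrate degradation.
async def flaky_model(name: str) -> str:
    if name == "gemini":
        raise ConnectionError(f"{name} unreachable")
    return f"{name}: fine answer"

async def degrade_gracefully(models: list[str]) -> list[str]:
    # return_exceptions=True keeps one failure from cancelling the rest;
    # exceptions come back as values in the results list.
    results = await asyncio.gather(
        *(flaky_model(m) for m in models), return_exceptions=True
    )
    # Keep only the successful responses for synthesis.
    return [r for r in results if isinstance(r, str)]

survivors = asyncio.run(degrade_gracefully(["claude", "gemini", "deepseek"]))
print(survivors)  # the gemini failure is dropped; two answers remain
```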
Related MCP Servers
ai-guide
Programmer Yupi's comprehensive AI resource collection + beginner-friendly Vibe Coding tutorial, covering a large-model selection guide (DeepSeek / GPT / Gemini / Claude), the latest AI news, a prompt collection, an AI knowledge encyclopedia (RAG / MCP / A2A), AI programming tutorials, AI tool guides (Cursor / Claude Code / OpenClaw / TRAE / Lovable / Agent Skills), AI development framework tutorials (Spring AI / LangChain), and a guide to monetizing AI products, helping you quickly master AI technology and stay ahead of the curve. This project is the open-source documentation edition and has been upgraded into Yupi's AI navigation website.
Everywhere
Context-aware AI assistant for your desktop. Ready to respond intelligently, seamlessly integrating multiple LLMs and MCP tools.
archestra
Secure cloud-native MCP registry, gateway & orchestrator
papersgpt-for-zotero
A powerful Zotero AI and MCP plugin with ChatGPT, Gemini 3.1, Claude, Grok, DeepSeek, OpenRouter, Kimi 2.5, GLM 5, SiliconFlow, GPT-oss, Gemma 3, Qwen 3.5
py-gpt
Desktop AI Assistant powered by GPT-5, GPT-4, o1, o3, Gemini, Claude, Ollama, DeepSeek, Perplexity, Grok, Bielik, chat, vision, voice, RAG, image and video generation, agents, tools, MCP, plugins, speech synthesis and recognition, web search, memory, presets, assistants,and more. Linux, Windows, Mac
Unity
AI-powered bridge connecting LLMs and advanced AI agents to the Unity Editor via the Model Context Protocol (MCP). Chat with AI to generate code, debug errors, and automate game development tasks directly within your project.