brainstorm
MCP server for multi-round AI brainstorming debates between multiple models (GPT, DeepSeek, Groq, Ollama, etc.)
claude mcp add --transport stdio spranab-brainstorm-mcp npx -y brainstorm-mcp \
  --env GROQ_API_KEY="gsk_..." \
  --env GEMINI_API_KEY="AIza..." \
  --env OPENAI_API_KEY="sk-..." \
  --env DEEPSEEK_API_KEY="sk-..."
How to use
brainstorm-mcp runs multi-round debates between multiple AI models, with Claude as an active participant in every round. It orchestrates concurrent model responses, enforces per-model timeouts, and produces a structured synthesis at the end. You start a session with the brainstorm tool, optionally selecting specific providers and models, and during interactive sessions you feed Claude's response back into the debate with brainstorm_respond. Supported providers include OpenAI, Gemini, DeepSeek, Groq, and local Ollama models; models, API keys, and base URLs can be set through environment variables or a JSON config file. In each round Claude reads the other models' outputs and contributes its own perspective before the next round begins, so you get diverse viewpoints and a consolidated synthesis.
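Under the hood, these interactions are ordinary MCP tool calls. As a rough illustration, a client invoking the brainstorm tool sends a JSON-RPC tools/call request shaped like the following sketch (the argument names, such as topic and rounds, are illustrative assumptions, not the server's documented schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "brainstorm",
    "arguments": {
      "topic": "How should we cache embeddings?",
      "rounds": 2
    }
  }
}
```

The outer envelope (jsonrpc, method, params.name, params.arguments) is the standard MCP tool-call shape; only the inner arguments vary per tool.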
How to install
Prerequisites:
- Node.js and npm installed on your system
- Access keys for the supported providers (OpenAI, Gemini, DeepSeek, Groq) if you plan to use their APIs
- Optional: Ollama or other local models if you want to run local providers
Installation steps:
- Clone the repository (or install the MCP package globally):
git clone https://github.com/spranab/brainstorm-mcp.git
cd brainstorm-mcp
npm install
- Build (if required by the project) and start the MCP server:
npm run build
npm start
- Run via npx (as shown in the Quick Start):
npx -y brainstorm-mcp
- Configure your client (Claude Code or Claude Desktop) to point to the brainstorm server name and provide the necessary API keys via environment variables or a config file as described in the documentation.
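For Claude Desktop, the equivalent entry in claude_desktop_config.json uses the standard mcpServers format. The server name "brainstorm" is a placeholder you can rename, and the key values below are the same placeholders as in the Quick Start:

```json
{
  "mcpServers": {
    "brainstorm": {
      "command": "npx",
      "args": ["-y", "brainstorm-mcp"],
      "env": {
        "GROQ_API_KEY": "gsk_...",
        "GEMINI_API_KEY": "AIza...",
        "OPENAI_API_KEY": "sk-...",
        "DEEPSEEK_API_KEY": "sk_..."
      }
    }
  }
}
```

You can omit the env entries for any provider you do not plan to use.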
Additional notes
Tips and common considerations:
- Ensure your API keys for OpenAI, Gemini, DeepSeek, and Groq are exported in your environment or set in the JSON config file before starting the server.
- When deploying, set the same environment variables in the runtime environment that launches the MCP server, so credentials are available at startup.
- The system supports per-model timeouts (2 minutes per API call by default); if one model is slow or times out, the others continue, and the synthesizer still produces the final output.
- Interactive sessions enable Claude to participate in every round; if you want external models only, you can run a non-interactive session by setting participate=false.
- After all rounds complete, the designated synthesizer model produces the final synthesis; you can override which model synthesizes in your configuration if needed.
- If you run into baseURL issues with local models (Ollama) or provider autodetection, double-check the provider section in your config file and ensure the corresponding services are up and reachable.
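The credential check from the tips above can be sketched in the shell as follows (the key values are placeholders; substitute your real keys):

```shell
# Export the provider keys before starting the server (placeholder values shown).
export GROQ_API_KEY="gsk_..."
export GEMINI_API_KEY="AIza..."
export OPENAI_API_KEY="sk-..."
export DEEPSEEK_API_KEY="sk-..."

# Count how many of the four expected *_API_KEY variables are actually set,
# so a missing credential is caught before the server starts.
count=$(env | grep -cE '^(GROQ|GEMINI|OPENAI|DEEPSEEK)_API_KEY=')
echo "provider keys set: $count"
```

Running this before npm start (or the npx invocation) makes missing-credential failures obvious up front rather than mid-debate.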