multi-agent-debate
A Model Context Protocol (MCP) server for structured multi-agent debates
claude mcp add --transport stdio albinjal-multi-agent-debate-mcp npx -y multi-agent-debate-mcp
How to use
The Multi-Agent Debate MCP Server enables structured debates among multiple AI personas (for example, pro, con, and judge). It provides a round-based workflow in which agents register, present arguments, and deliver rebuttals, culminating in a judgment phase. The system tracks verdicts and rationales across rounds, supporting complex multi-perspective discussions and educational simulations. Use this server to orchestrate debates with clearly identified agents, configurable rounds, and automated verdicts that help illustrate the strengths and weaknesses of arguments.
To interact with the tool, provide inputs such as agentId, round, action, and content. For example, register agents like pro, con, and judge; then proceed with argue and rebut actions for each round; finally invoke judge with a verdict and rationale. The server supports additional parameters like targetAgentId for rebuttals and needsMoreRounds to signal whether further debate rounds are required. The output is designed to be colorized in the terminal for readability and to support multi-round and multi-agent configurations beyond just two sides.
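The workflow above can be sketched as a sequence of tool-call payloads. This is an illustrative sketch only: the field names (agentId, round, action, content, targetAgentId, needsMoreRounds) are taken from the description above, but the server's exact tool schema may differ, and the `debate_step` helper is hypothetical.

```python
def debate_step(agent_id, round_num, action, content, **extra):
    """Build one hypothetical tool-call payload for the debate server."""
    payload = {
        "agentId": agent_id,   # unique identifier, e.g. "pro", "con", "judge"
        "round": round_num,    # 1-based debate round
        "action": action,      # "register", "argue", "rebut", or "judge"
        "content": content,    # the argument, rebuttal, or verdict text
    }
    payload.update(extra)      # optional fields, e.g. targetAgentId, needsMoreRounds
    return payload

# One full round: register three agents, argue, rebut, then judge.
steps = [
    debate_step("pro", 1, "register", "Argues in favor of the motion"),
    debate_step("con", 1, "register", "Argues against the motion"),
    debate_step("judge", 1, "register", "Evaluates both sides"),
    debate_step("pro", 1, "argue", "Opening argument for the motion"),
    debate_step("con", 1, "rebut", "Counter to the opening argument",
                targetAgentId="pro"),
    debate_step("judge", 1, "judge", "Verdict for pro, on strength of evidence",
                needsMoreRounds=False),
]
```

Setting `needsMoreRounds=True` in the judge step would signal that the debate should continue into another round instead of ending.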
How to install
Prerequisites:
- Node.js and npm installed on your system (Node 14+ recommended)
- Access to the internet to fetch the MCP package (via npm or npx)
Install and run using npx (recommended):
- Ensure Node.js and npm are installed
- Run the MCP server using npx: npx -y multi-agent-debate-mcp
Alternatively, install the package globally first and start it directly:
- Install the package globally (optional): npm install -g multi-agent-debate-mcp
- Start the server from your project or command line: multi-agent-debate-mcp
Alternative: Docker (if you prefer containerized deployment):
- Ensure Docker is installed and running
- Run the MCP server with the provided image: docker run --rm -i ghcr.io/albinjal/multi-agent-debate-mcp:latest
Notes:
- The README configuration shows both npx and Docker options; choose the method that fits your environment.
- If you run behind a proxy, ensure npm and Docker have proxy settings configured.
- No special environment variables are required by default, but you can add them under the env section of your deployment configuration if the MCP adds runtime options later.
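For MCP clients configured through a JSON file (for example, Claude Desktop's claude_desktop_config.json), the npx invocation above can be registered roughly as follows; the "multi-agent-debate" key name is illustrative, and you would add an "env" object here if runtime options are introduced later:

```json
{
  "mcpServers": {
    "multi-agent-debate": {
      "command": "npx",
      "args": ["-y", "multi-agent-debate-mcp"]
    }
  }
}
```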
Additional notes
Tips and common issues:
- Ensure unique agent IDs (e.g., pro, con, judge or other identifiers) to avoid confusion during debates.
- Configure multiple rounds and use the needsMoreRounds flag to control the flow of the debate.
- For reproducibility, log the debate transcripts and verdict rationales for each round.
- If using Docker, pull the latest image and verify network accessibility if the MCP communicates with external AI services.
- When integrating with external AI agents, ensure proper content handling to manage long arguments and rebuttals.
- The system outputs colorized console text; if integrating with non-terminal interfaces, you may want to capture plain text versions of outputs.
- If you need to customize agent roles or add more personas, extend the agent IDs accordingly within your orchestration logic.
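Since the server's console output is colorized with ANSI escape sequences, a small helper can strip them when capturing plain-text transcripts for non-terminal interfaces. This is a minimal sketch covering the common CSI color codes; exotic terminal sequences may need a broader pattern.

```python
import re

# ANSI CSI sequences have the form ESC [ <params> <final-letter>,
# e.g. "\x1b[32m" (green) or "\x1b[0m" (reset).
ANSI_ESCAPE = re.compile(r"\x1b\[[0-9;]*[A-Za-z]")

def strip_ansi(text: str) -> str:
    """Remove ANSI color/formatting codes from captured server output."""
    return ANSI_ESCAPE.sub("", text)

colored = "\x1b[32mpro\x1b[0m argues: \x1b[1mopening statement\x1b[0m"
plain = strip_ansi(colored)  # "pro argues: opening statement"
```

Running captured output through a filter like this before logging keeps round-by-round transcripts readable in files and web UIs.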
Related MCP Servers
any-chat-completions
MCP Server for using any LLM as a Tool
bitbucket
Bitbucket MCP - A Model Context Protocol (MCP) server for integrating with Bitbucket Cloud and Server APIs
time
⏰ Time MCP Server: Giving LLMs Time Awareness Capabilities
unity-editor
An MCP server and client for LLMs to interact with Unity Projects
website-publisher
An AI website builder and publisher MCP. Quickly publish and deploy your AI-generated code to a real website URL. Supports HTML, CSS, JS, Python, etc.
xgmem
A global memory MCP server that manages data across all projects.