zen
Zen MCP server with advanced AI collaboration features
claude mcp add --transport stdio jray2123-zen-mcp-server npx -y zen-mcp-server
How to use
Zen MCP is a model-context orchestration server that exposes multiple AI models and tools in a single conversational workflow. It lets you run a multi-model development pipeline in which Claude, Gemini, O3, and other models are invoked for specialized subtasks such as code analysis, planning, debugging, code review, and pre-commit validation. The server coordinates the following tool-enabled workflows:
- chat for collaborative thinking
- thinkdeep for extended reasoning
- planner for interactive step-by-step planning
- codereview for professional code assessments
- precommit for validation
- debug for expert debugging
- analyze for smart file analysis
- refactor for intelligent code restructuring
Context carries across steps, so model perspectives stay aligned as tasks progress through sub-workflows. Once the server is running, you can drive conversations that automatically chain between models and tools as needed for each subtask. Useful workflows include initiating a codereview, triggering a multi-model planning phase, and finishing with precommit validation to keep changes within quality gates.
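Beyond the CLI command above, the server can be registered in an MCP client's configuration file. Below is a minimal sketch for a Claude Desktop-style `claude_desktop_config.json`; the server name `zen` is arbitrary, and the `npx` invocation mirrors the install command in this listing. Adjust to your client and setup.

```json
{
  "mcpServers": {
    "zen": {
      "command": "npx",
      "args": ["-y", "zen-mcp-server"]
    }
  }
}
```

With this entry in place, the client launches the server over stdio and discovers its tools automatically.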
How to install
Prerequisites
- Node.js (recommended latest LTS) and npm installed on your system
- Basic familiarity with running CLI commands
Install steps
- Install Node.js and npm if not already installed
- Linux (Debian/Ubuntu): curl -fsSL https://deb.nodesource.com/setup_lts.x | bash - && apt-get install -y nodejs
- macOS: brew install node (via Homebrew), or download the installer from https://nodejs.org/
- Windows: download and install from https://nodejs.org/
- Install the Zen MCP server package and run via npx
- Open your terminal, verify the toolchain with node -v and npm -v, then run: npx -y zen-mcp-server
- Optional: run via Docker (alternative method)
- docker pull zen-mcp-server:latest
- docker run -i zen-mcp-server:latest
- Verify installation
- The command should start the Zen MCP server over stdio; an MCP-enabled client can then list and invoke its tools as documented in the repository README.
- Environment configuration (optional)
- If provided, create a config file or environment variables as described in the project docs to tailor model endpoints, timeouts, and available tools.
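As an illustration, environment-based configuration for the model endpoints might look like the .env sketch below. The variable names (GEMINI_API_KEY, OPENAI_API_KEY) are assumptions here; confirm the exact names and any timeout or tool settings against the project README before use.

```
# .env — hypothetical variable names; confirm against the project docs
GEMINI_API_KEY=your-gemini-key    # enables Gemini-backed tools
OPENAI_API_KEY=your-openai-key    # enables O3/OpenAI-backed tools
```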
Additional notes
Tips and common considerations:
- If you plan to run multiple workflows concurrently, ensure your server has enough CPU/RAM to handle model invocations and intermediate context storage.
- Tools like planner, analyze, and codereview rely on underlying model endpoints; ensure API keys or access tokens are configured if your deployment uses protected endpoints.
- Use the 1-to-1 mapping of tool names to their capabilities as shown in the Tools Reference section of the README to compose robust workflows.
- For debugging, start with a minimal workflow (e.g., chat + analyze) to verify basic orchestration before adding additional tools like refactor or precommit.
- If you encounter model-timeouts or rate limits, consider staggering model calls or increasing timeouts in your environment configuration.
- Watch conversation-context continuity across steps to prevent prompt drift; Zen MCP is designed to preserve context across the multi-model workflow.
Related MCP Servers
mcp-vegalite
MCP server from isaacwasserman/mcp-vegalite-server
github-chat
A Model Context Protocol (MCP) for analyzing and querying GitHub repositories using the GitHub Chat API.
nautex
MCP server for guiding coding agents via an end-to-end requirements-to-implementation-plan pipeline
pagerduty
PagerDuty's official local MCP (Model Context Protocol) server which provides tools to interact with your PagerDuty account directly from your MCP-enabled client.
futu-stock
MCP server for Futu NiuNiu stock data
mcp-boilerplate
Boilerplate using one of the 'better' ways to build MCP servers, written using FastMCP.