
pal

The power of Claude Code / Gemini CLI / Codex CLI + [Gemini / OpenAI / OpenRouter / Azure / Grok / Ollama / Custom Model / All Of The Above] working as one.

Installation
Run this command in your terminal to add the MCP server to Claude Code:

claude mcp add --transport stdio beehiveinnovations-pal-mcp-server node path/to/server.js
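If you prefer editing configuration directly, the command above corresponds roughly to an entry like the following. This is a sketch assuming Claude Code's project-level .mcp.json format; the server name and path are taken from the command above, and the path placeholder should be replaced with the actual location of your checkout:

```shell
# Sketch: write an equivalent stdio server entry to .mcp.json.
# Assumes Claude Code's project-level config format; adjust the
# args path to wherever the PAL server entry point actually lives.
cat > .mcp.json <<'EOF'
{
  "mcpServers": {
    "beehiveinnovations-pal-mcp-server": {
      "type": "stdio",
      "command": "node",
      "args": ["path/to/server.js"]
    }
  }
}
EOF
```

Claude Code reads this file from the project root, so teammates who clone the repository pick up the same server registration.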

How to use

PAL MCP provides a multi-model orchestration layer that connects your CLI workflow to a diverse array of AI models and tools. It supports conversation threading across models, context isolation for sub-tasks, and role-based subagents (such as planners or code reviewers) for tackling complex development workflows. With PAL, you can orchestrate external CLIs like Claude Code, Gemini CLI, Codex CLI, and other tools directly from your main session while maintaining a unified conversation history and consolidated results. The built-in clink capability lets you bridge external CLIs into your workflow, spawn subagents for isolated tasks, and run multi-model debates to surface deeper insights before implementing changes.

To use PAL MCP, start the server and connect via your CLI or orchestration client. Then invoke bundled workflows (e.g., multi-model code reviews, automated planning, and pre-commit checks) that route tasks to the most suitable models. The system supports context revival and cross-model collaboration, enabling you to gain multiple perspectives on code analysis, design decisions, and debugging, all while preserving your primary workspace and control over the process.
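Under the hood, MCP servers communicate over stdio using JSON-RPC 2.0, so you can sanity-check a freshly built server by piping it an initialize request before wiring it into a client. This is a generic sketch of the MCP handshake, not a PAL-specific command; the protocol version shown and the server path are assumptions to adapt to your setup:

```shell
# Build a minimal MCP initialize request (JSON-RPC 2.0 over stdio).
REQ='{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"smoke-test","version":"0.0.1"}}}'

# Pipe it to the server; a healthy server answers with its own
# capabilities. (Path is the placeholder from the install command;
# the || true lets this sketch run even before the server exists.)
printf '%s\n' "$REQ" | node path/to/server.js 2>/dev/null || true
```

A response containing the server's name and capabilities indicates the transport is working and the server is ready for a client connection.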

How to install

Prerequisites:

  • Node.js (recommended) or a compatible runtime for your MCP server
  • Access to the repository containing the PAL MCP server code
  • Basic familiarity with running CLI tools and environment variables

Installation steps:

  1. Clone the repository and enter it:

     git clone https://github.com/beehiveinnovations-pal-mcp-server.git
     cd beehiveinnovations-pal-mcp-server

  2. Install dependencies (if applicable):

     npm install

    or, using yarn:

     yarn install

  3. Build or prepare the server (if a build step exists):

     npm run build

    or the project-specific build command.

  4. Create or edit the configuration to point at your server entry (path/to/server.js in this example), and make sure the mcp_config matches your environment (see the mcp_config section).

  5. Run the MCP server:

     npm start

    or node path/to/server.js, depending on your setup.

  6. Verify the server is running by checking logs or hitting a health endpoint if provided.
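The steps above can be collected into a single script. This is a sketch under the assumptions already stated (Node.js runtime, npm, and the repository URL from step 1); step 4 is omitted because configuration is environment-specific. Save it and run it from the directory where you want the checkout:

```shell
# install-pal.sh — sketch of installation steps 1–3 and 5 above.
cat > install-pal.sh <<'EOF'
#!/bin/sh
set -e

# 1. Clone the repository and enter it.
git clone https://github.com/beehiveinnovations-pal-mcp-server.git
cd beehiveinnovations-pal-mcp-server

# 2. Install dependencies.
npm install

# 3. Build, if the project defines a build script.
npm run build --if-present

# 5. Start the server (step 4, configuration, is environment-specific).
npm start
EOF
chmod +x install-pal.sh
```

Using `npm run build --if-present` lets the same script work whether or not the project defines a build step.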

Notes:

  • If your deployment uses a different runtime (e.g., uvx/python, docker), adjust the command and arguments accordingly.
  • Ensure required environment variables are set for model access (APIs, tokens, endpoints).
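As a concrete example of the second note, provider credentials are typically injected as environment variables before starting the server. The variable names below are illustrative assumptions (common names for these providers), not confirmed PAL configuration keys; check the project's own configuration docs for the exact names it reads:

```shell
# Illustrative only: variable names depend on which providers you enable.
export GEMINI_API_KEY="your-gemini-key"          # Gemini access
export OPENAI_API_KEY="your-openai-key"          # OpenAI access
export OPENROUTER_API_KEY="your-openrouter-key"  # OpenRouter access
export CUSTOM_API_URL="http://localhost:11434"   # e.g. a local Ollama endpoint
```

Launch the server from the same shell (e.g. npm start) so the process inherits these variables, or place them in an env file loaded by your process manager.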

Additional notes

Tips and caveats:

  • If you rely on external CLIs, ensure those CLIs are installed and accessible in the environment where PAL MCP runs. The clink integration expects the external tools to be invocable from the host.
  • Use context isolation for long-running analyses to prevent cross-task pollution. Leverage role agents like planner or codereviewer to structure complex workflows.
  • For large prompts or multi-model debates, take advantage of the automatic model coordination features to gather diverse insights and then consolidate results before implementing changes.
  • Monitor resource usage (CPU/memory) when running multiple subagents or large models in parallel, and consider throttling or sequential execution for heavy tasks.
  • Store sensitive API keys and tokens in environment variables or a secret manager rather than hard-coding them in configuration files.
