
agentmesh

🤖🕸️ Production-grade multi-agent orchestration framework powered by Pregel BSP. Build sophisticated AI workflows with parallel execution, state management, and observability.

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio hupe1980-agentmesh -- docker run -i --rm \
  --env OPENAI_API_KEY="your-openai-api-key" \
  --env BEDROCK_API_KEY="your-bedrock-api-key" \
  --env PINECONE_API_KEY="your-pinecone-api-key" \
  --env ANTHROPIC_API_KEY="your-anthropic-api-key" \
  ghcr.io/hupe1980/agentmesh:latest
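After adding the server, you can confirm the registration with Claude Code's built-in MCP commands (standard Claude Code CLI; output format may vary by version):

```shell
# List all configured MCP servers and check that the new entry appears
claude mcp list

# Show the details of this specific server entry
claude mcp get hupe1980-agentmesh
```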

How to use

AgentMesh is a production-grade multi-agent orchestration framework that uses Pregel-style bulk-synchronous parallel (BSP) graph processing to coordinate AI agents and tools at scale. It integrates with a variety of LLM providers, supports memory and embedding storage, and enables dynamic tool discovery via MCP servers. With AgentMesh you can compose complex agent workflows that combine parallel reasoning, tool calls, memory introspection, and model-based planning, with observability via metrics and tracing.

To use it, run the provided MCP server (in this configuration, via Docker) and connect your clients or orchestration layer to the MCP endpoints. The server exposes mechanisms to register tools, define agents, and orchestrate multi-step workflows across BSP vertices, providing scalable execution and robust state management.
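The BSP model behind this is easiest to see in miniature. The Go sketch below is illustrative only (`Vertex` and `RunSupersteps` are not the AgentMesh API): each active vertex processes its inbox, optionally sends messages, and votes to halt when nothing changes; messages are delivered only at the barrier between supersteps. Here the per-vertex work is simply "adopt the largest value heard so far".

```go
package main

import "fmt"

// Vertex is a minimal Pregel-style vertex: a value, outgoing edges,
// an inbox of pending messages, and an active flag (vote-to-halt).
type Vertex struct {
	ID     int
	Value  int
	Edges  []int // outgoing neighbor IDs
	Active bool
	Inbox  []int
}

// RunSupersteps propagates the maximum value across the graph and
// returns the number of supersteps executed. In each superstep every
// active vertex processes its inbox, updates its value, and emits
// messages; a barrier delivers all messages before the next superstep.
func RunSupersteps(vertices map[int]*Vertex) int {
	steps := 0
	for {
		outboxes := map[int][]int{} // buffered until the barrier
		anyActive := false
		for _, v := range vertices {
			if !v.Active {
				continue
			}
			changed := steps == 0 // every vertex speaks in superstep 0
			for _, m := range v.Inbox {
				if m > v.Value {
					v.Value = m
					changed = true
				}
			}
			v.Inbox = nil
			if changed {
				for _, n := range v.Edges {
					outboxes[n] = append(outboxes[n], v.Value)
				}
				anyActive = true
			} else {
				v.Active = false // vote to halt
			}
		}
		// Barrier: deliver all messages at once, waking recipients.
		for id, msgs := range outboxes {
			vertices[id].Inbox = msgs
			vertices[id].Active = true
		}
		steps++
		if !anyActive {
			return steps
		}
	}
}

func main() {
	g := map[int]*Vertex{
		1: {ID: 1, Value: 3, Edges: []int{2}, Active: true},
		2: {ID: 2, Value: 6, Edges: []int{1, 3}, Active: true},
		3: {ID: 3, Value: 2, Edges: []int{2}, Active: true},
	}
	steps := RunSupersteps(g)
	fmt.Println("supersteps:", steps, "values:", g[1].Value, g[2].Value, g[3].Value)
}
```

The same structure — per-vertex work, buffered message passing, synchronous barriers, and vote-to-halt — is what lets a BSP engine run many agent steps in parallel while keeping state transitions deterministic within each superstep.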

How to install

Prerequisites:

  • Docker installed on your machine (Docker Desktop or Docker Engine).
  • Optional: access keys for your preferred LLM providers (e.g., OpenAI, Anthropic, Gemini, Bedrock).

Installation steps:

  1. Pull and run the AgentMesh MCP server via Docker (as defined in the mcp_config):

# If needed, log in to the container registry first
# docker login ghcr.io

# Start the MCP server; this pulls the latest image and runs it in interactive mode
docker run -i --env OPENAI_API_KEY="your-openai-api-key" ghcr.io/hupe1980/agentmesh:latest

  2. Verify the container is running and healthy:

docker ps -a | grep agentmesh

  3. (Optional) Build from source for local development:

# Requires the Go toolchain
go install github.com/hupe1980/agentmesh@latest

# Or clone and build from source
git clone https://github.com/hupe1980/agentmesh.git
cd agentmesh
go build ./...
./agentmesh --help

  4. Configure environment variables as needed (example shown in mcp_config):

export OPENAI_API_KEY=your-key
export ANTHROPIC_API_KEY=your-key
export BEDROCK_API_KEY=your-key

  5. If you prefer a non-Docker deployment, build the binary locally with the standard Go build steps and run the executable directly, providing the same environment variables.
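Before launching, it can help to fail fast when no provider key is present. A minimal POSIX-shell sketch (the variable names mirror the mcp_config above; the helper name is illustrative):

```shell
# has_provider_key: succeed (exit 0) if at least one known LLM provider key is set
has_provider_key() {
  for var in OPENAI_API_KEY ANTHROPIC_API_KEY BEDROCK_API_KEY PINECONE_API_KEY; do
    # printenv prints nothing when the variable is unset, so the test fails
    if [ -n "$(printenv "$var")" ]; then
      return 0
    fi
  done
  return 1
}

# Usage: has_provider_key || { echo "no provider key set" >&2; exit 1; }
```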

Additional notes

Tips and common considerations:

  • Keep your OpenAI, Anthropic, Gemini, and other provider API keys secure and never commit them to version control.
  • When using Kubernetes or cloud runtimes, consider mounting config as environment variables or using a secrets manager.
  • If you run into connectivity issues with LLM providers, check network egress rules and per-provider rate limits.
  • AgentMesh supports multiple MCP tool discovery modes; enable dynamic tool loading if you plan to integrate tools at runtime.
  • For observability, enable OpenTelemetry tracing in your deployment to monitor BSP vertex execution and tool calls.
  • Adjust tool iteration limits and timeout policies according to workload characteristics to balance latency and throughput.
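For the tracing note above, one way to wire it up is via the standard OpenTelemetry environment variables, assuming the server's OTel integration honors them (a deployment sketch, not a confirmed AgentMesh configuration; the endpoint is a placeholder for your OTLP collector):

```shell
docker run -i --rm \
  --env OTEL_SERVICE_NAME=agentmesh \
  --env OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317 \
  --env OPENAI_API_KEY="your-openai-api-key" \
  ghcr.io/hupe1980/agentmesh:latest
```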
