
obot

Complete MCP Platform -- Hosting, Registry, Gateway, and Chat Client

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio obot-platform-obot -- docker run -d --name obot -p 8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e OPENAI_API_KEY=<API KEY> \
  -e ANTHROPIC_API_KEY=<ANTHROPIC_API_KEY> \
  ghcr.io/obot-platform/obot:latest

How to use

Obot is an open-source platform that lets you host MCP servers, publish them to a central registry, and access them through a unified MCP gateway and chat client. With Obot running, you can deploy MCP servers locally or in a managed environment via Docker, Kubernetes, or other supported runtimes, and then manage access, authentication, and auditing from the built-in admin UI. The Obot Chat component provides a standardized chat interface that supports leading model providers (e.g., OpenAI, Anthropic) and can leverage domain-specific information through RAG, project-wide memory, and reusable configurations. Organizations can discover and use MCP servers via the registry, while IT admins control deployment rights, catalog publishing, and sharing policies.

To start using Obot, open the UI at http://localhost:8080 once the container is running. From there you can register MCP servers (Node.js, Python, or container-based), publish them to the catalog, and configure access rules. The MCP gateway provides a single entry point to MCP servers with request filtering, logging, and usage visibility, helping you monitor which servers are being used and by whom. The platform also supports integrating with workflow tools and agents (e.g., n8n, LangGraph) to automate interactions with MCP servers, and it enables clients like ChatGPT or Claude Desktop to connect to your hosted MCP ecosystem through the gateway.
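Once a server is hosted behind the gateway, a stdio-only client such as Claude Desktop can reach it through a stdio-to-HTTP bridge. The sketch below uses the community `mcp-remote` package for that bridge; the exact connection URL is not shown here — copy it from the server's connection details in the Obot UI:

```
{
  "mcpServers": {
    "obot-gateway": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "<connection URL copied from the Obot UI>"]
    }
  }
}
```

Place this in the client's MCP configuration file (for Claude Desktop, `claude_desktop_config.json`); the client will then route requests through the Obot gateway, where they are subject to the access rules and logging described above.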

How to install

Prerequisites:

  • Docker installed on your host machine
  • Internet access to pull the Obot image
  • API keys for model providers (e.g., OpenAI, Anthropic) if you plan to use external models

Steps:

  1. Install Docker.
  2. Run the Obot container. Example (adjust keys as needed):

docker run -d --name obot -p 8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e OPENAI_API_KEY=<API KEY> \
  ghcr.io/obot-platform/obot:latest

  3. Access the Obot UI at http://localhost:8080.
  4. Configure keys and providers: in the Obot admin UI, set your OpenAI or Anthropic keys and configure model providers as needed. Optionally set ANTHROPIC_API_KEY or other provider credentials required by your deployment.
  5. Optional: advanced deployment. For Kubernetes or other orchestrators, follow the Installation Guide linked in the Obot documentation for alternative deployment methods.
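The docker run command above fails in predictable ways when the Docker socket is missing, port 8080 is taken, or no API key is set. A small pre-flight check (an illustrative sketch, not part of Obot itself) can surface these problems before launching:

```python
import os
import socket


def port_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 when a connection succeeds, i.e. the port is busy
        return s.connect_ex((host, port)) != 0


def preflight(port: int = 8080, docker_sock: str = "/var/run/docker.sock") -> list:
    """Collect human-readable problems that would stop the docker run above."""
    problems = []
    if not os.path.exists(docker_sock):
        problems.append(f"Docker socket not found at {docker_sock}")
    if not port_free(port):
        problems.append(f"Port {port} is already in use")
    if not os.environ.get("OPENAI_API_KEY"):
        problems.append("OPENAI_API_KEY is not set")
    return problems


if __name__ == "__main__":
    issues = preflight()
    for p in issues:
        print("warning:", p)
    if not issues:
        print("preflight OK -- safe to run the Obot container")
```

Run it on the host before step 2; any warnings map directly to a prerequisite above.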

Additional notes

Tips and common considerations:

  • If you use a private registry, ensure authentication is configured in the Obot admin UI and that the registry supports MCP server metadata standards.
  • Remember to expose port 8080 only to trusted networks or behind a reverse proxy with TLS termination for production deployments.
  • Review your OAuth 2.1 and token-handling configuration to maintain secure access control across users and groups.
  • When using multiple model providers, verify rate limits and pricing to avoid unexpected costs.
  • The MCP gateway provides logging and request inspection; enable appropriate logging verbosity in production to aid debugging and auditing.
  • Ensure the host has sufficient CPU/memory resources to run MCP servers and the Obot gateway concurrently, especially if you deploy multi-user HTTP servers.
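The reverse-proxy advice above can be sketched as an nginx server block that terminates TLS in front of the container; hostname and certificate paths are placeholders for your environment, and the Upgrade headers are included because chat and gateway traffic may use streaming or WebSocket connections:

```
server {
    listen 443 ssl;
    server_name obot.example.com;

    ssl_certificate     /etc/ssl/certs/obot.example.com.pem;
    ssl_certificate_key /etc/ssl/private/obot.example.com.key;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        # Allow upgraded (streaming/WebSocket) connections through the proxy
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

With this in place, bind the container's port 8080 to localhost only (e.g., `-p 127.0.0.1:8080:8080`) so all external traffic must pass through the TLS proxy.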
