kai
An MCP Server for Kubernetes
claude mcp add --transport stdio basebandit-kai kai
How to use
Kai is a Kubernetes MCP server that acts as a bridge between large language models (LLMs) and your Kubernetes cluster. It exposes a set of tools for managing core Kubernetes resources such as pods, deployments, jobs, cron jobs, services, ingresses, config maps, secrets, and namespaces. It also provides context management for listing and switching contexts, plus utilities such as port forwarding to pods and services. With Kai, you can issue natural-language commands, which it translates into Kubernetes API operations, returning structured results that your LLM can reason about or present to you directly. By default the server connects to your current kubectl context, so it reuses your existing kubeconfig and cluster access credentials.
To use Kai, install the binary, run it, and configure your MCP client (Claude Desktop, Cursor, Continue, or a web client) to point at the Kai command. The server can run in stdio mode or in SSE mode for web-based clients. Typical tasks include listing resources, creating resources (pods, deployments, services, cron jobs), retrieving details, updating configurations, and streaming logs from pods. Kai also supports common Kubernetes patterns such as port forwarding, which gives you local access to services running inside the cluster.
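The two transport modes correspond to two ways of starting the server. A minimal sketch, assuming the -transport flag and the http://localhost:8080/sse endpoint described in the notes further down; the curl check is purely illustrative:

```shell
# stdio mode (default): the MCP client spawns kai and talks over stdin/stdout
kai

# SSE mode: kai serves web-based MCP clients over Server-Sent Events
kai -transport=sse &

# illustrative check that the SSE endpoint is reachable (-N keeps the stream open)
curl -N http://localhost:8080/sse
```

In stdio mode you normally never run kai by hand; the MCP client launches it for you and owns the process lifetime.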
How to install
Prerequisites:
- A Kubernetes cluster reachable via kubectl (a configured kubeconfig with credentials for the cluster).
- Go installed on your machine (to build or install Kai from source, if needed).
- Optional: access tokens or certificates if your cluster requires them.
Installation steps (recommended):
- Install Kai with the Go toolchain:
go install github.com/basebandit/kai/cmd/kai@latest
- Ensure the kai binary is in your PATH. You can verify installation with:
kai --version
- Start Kai (default transport is stdio; for web clients you can use SSE):
kai
- Configure your MCP client (Claude Desktop, Cursor, Continue, or a web client) to point to the Kai command, as described in the README under Configuration. For example, set the command to /path/to/kai in your Claude Desktop or Cursor settings.
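For Claude Desktop specifically, pointing the client at the binary amounts to an entry in its configuration file. A minimal sketch, assuming the standard mcpServers layout; the server name "kai" and the binary path are placeholders you should adjust, and the authoritative schema is your MCP client's own documentation:

```json
{
  "mcpServers": {
    "kai": {
      "command": "/path/to/kai"
    }
  }
}
```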
Additional notes
- Kai relies on your existing kubectl context. If you need to target a different cluster, use a kubeconfig path or context via Kai's options (as documented in the README).
- Transport modes: use stdio for local CLI-style interactions or -transport=sse for web-based clients that connect over SSE to http://localhost:8080/sse.
- Default kubeconfig path is ~/.kube/config, but you can override it with -kubeconfig and -context when launching Kai.
- Logs are emitted in structured JSON by default, which makes them easy to parse in your tooling.
- Not all Kubernetes resources are exposed in Kai yet (see the Features list in the README). Core workloads (Pods, Deployments, Jobs, CronJobs), networking (Services, Ingress), configuration (ConfigMaps, Secrets, Namespaces), context management, and port forwarding are supported. If you need cluster health metrics or CRD support, you will have to rely on other tooling or extend Kai.
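Putting the notes above together, targeting a cluster other than your current kubectl context looks roughly like this. The -kubeconfig, -context, and -transport flags are taken from the notes above; the file path and context name are placeholders:

```shell
# point kai at a specific kubeconfig and context instead of the defaults
kai -kubeconfig="$HOME/.kube/staging-config" -context=staging

# the same overrides combine with SSE mode for web-based clients
kai -kubeconfig="$HOME/.kube/staging-config" -context=staging -transport=sse
```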