# k8s-pilot
Kubernetes Control Plane Server for Managing Multiple Clusters – the central pilot for your k8s fleets✈️✈️
```bash
claude mcp add --transport stdio bourbonkk-k8s-pilot uv run --with mcp[cli] mcp run k8s_pilot.py
```
## How to use
k8s-pilot is a centralized control plane for managing multiple Kubernetes clusters. It provides multi-cluster context switching, CRUD operations for common Kubernetes resources, and a readonly mode for safe inspection. Powered by the MCP framework, it enables Claude AI integrations and other MCP-enabled tools to interact with your Kubernetes fleets through a unified API surface. You can perform typical resource actions (create, read, update, delete) across deployments, services, configmaps, secrets, namespaces, and more, while optionally restricting write capabilities with the readonly mode for safer exploration.
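The readonly restriction described above can be pictured as a gate in front of every write operation. The following is an illustrative sketch only, not k8s-pilot's actual implementation; `READONLY`, `write_op`, and `delete_deployment` are hypothetical names:

```python
import functools

# Hypothetical module-level flag; k8s-pilot's real mechanism may differ.
READONLY = True

def write_op(func):
    """Reject the wrapped call when the server runs in readonly mode."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if READONLY:
            raise PermissionError(f"{func.__name__} is blocked in readonly mode")
        return func(*args, **kwargs)
    return wrapper

@write_op
def delete_deployment(name: str, namespace: str = "default") -> str:
    # A real tool would call the Kubernetes API here.
    return f"deleted {namespace}/{name}"

try:
    delete_deployment("web")
except PermissionError as exc:
    print(exc)  # delete_deployment is blocked in readonly mode
```

List/get operations would simply not carry the decorator, which is why they remain available in readonly mode.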
To use it, install the uv package manager and run the MCP server via the included k8s_pilot.py script. In normal mode you have full read/write access to your clusters; in readonly mode you can only list and get resources. Claude Desktop users can launch the server through its MCP integration by pointing uv at the repository path, or start the k8s_pilot.py entry point directly with the --readonly flag for safe inspection.
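The repository's mcp_config section is authoritative for the Claude Desktop entry; the snippet below is only an illustrative shape of such an `mcpServers` entry, with a placeholder path and server name rather than values taken from the repo:

```json
{
  "mcpServers": {
    "k8s-pilot": {
      "command": "uv",
      "args": [
        "run",
        "--with",
        "mcp[cli]",
        "mcp",
        "run",
        "/path/to/k8s-pilot/k8s_pilot.py"
      ]
    }
  }
}
```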
## How to install
Prerequisites:

- Python 3.13 or higher
- The uv package manager (see installation steps below)
- Access to one or more Kubernetes clusters (kubeconfig available at ~/.kube/config or in-cluster config)

Installation steps:

1) Clone the repository and navigate into it:

```bash
git clone https://github.com/bourbonkk/k8s-pilot.git
cd k8s-pilot
```

2) Ensure uv is installed:

- macOS:

```bash
brew install uv
```

- Linux:

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

3) Run the MCP server with uv and the k8s_pilot script:

```bash
uv run --with mcp[cli] mcp run k8s_pilot.py
```

4) (Optional) Run in readonly mode for safe inspection:

```bash
uv run --with mcp[cli] python k8s_pilot.py --readonly
```

5) For Claude Desktop or other MCP clients, configure the mcpServers entry as shown in the mcp_config section and connect.
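One practical note, drawn from general shell behavior rather than the k8s-pilot docs: in zsh (the macOS default shell), unquoted square brackets are glob patterns, so `--with mcp[cli]` can fail with "no matches found". Quoting the extra name keeps it literal; here `echo` stands in for the real uv invocation so the result is visible without running uv:

```shell
# Quote the extra so zsh passes the literal string mcp[cli] through.
extra='mcp[cli]'
echo "uv run --with $extra mcp run k8s_pilot.py"
# → uv run --with mcp[cli] mcp run k8s_pilot.py
```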
## Additional notes
Tips and notes:

- Readonly mode blocks create/update/delete operations for pods, deployments, services, configmaps, secrets, namespaces, and most other write operations. Use it when you need to audit or learn without modifying clusters.
- The server supports multi-cluster management and context switching, enabling operations across several clusters from a single interface.
- Ensure your kubeconfig is accessible to the environment running uv, or configure in-cluster access if deploying inside a cluster.
- If you plan to integrate with Claude Desktop, use the provided MCP config snippet to launch the server within your workflow.
- The README covers a wide range of Kubernetes resources (Deployments, Services, Pods, ConfigMaps, Secrets, Namespaces, StatefulSets, DaemonSets, Roles, etc.); use the list/get operations freely, and note that create/update/delete are unavailable in readonly mode.
- For troubleshooting, verify that uv is correctly installed and that k8s_pilot.py is executable and reachable from the running environment.
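Multi-cluster context switching ultimately relies on the contexts defined in your kubeconfig. As a minimal sketch, not part of k8s-pilot, here is a naive way to pull context names out of a kubeconfig without a YAML dependency; real code should use a proper YAML parser or the official kubernetes client instead:

```python
import re

def list_contexts(kubeconfig_text: str) -> list[str]:
    """Naively extract context names from kubeconfig YAML (sketch only)."""
    names = []
    in_contexts = False
    for line in kubeconfig_text.splitlines():
        if line.startswith("contexts:"):
            in_contexts = True
            continue
        # Any new top-level key (e.g. current-context:) ends the section.
        if in_contexts and re.match(r"^[A-Za-z]", line):
            in_contexts = False
        if in_contexts:
            m = re.match(r"^[-\s]+name:\s*(\S+)", line)
            if m:
                names.append(m.group(1))
    return names

sample = """\
apiVersion: v1
kind: Config
contexts:
- context:
    cluster: dev
    user: dev-admin
  name: dev
- context:
    cluster: prod
    user: prod-admin
  name: prod
current-context: dev
"""
print(list_contexts(sample))  # ['dev', 'prod']
```

Each name returned here corresponds to a context you could switch to from the k8s-pilot interface, assuming the kubeconfig is visible to the environment running uv.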