
k8s_mcp_server_prod

MCP server from samcolon/k8s_mcp_server_prod

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio samcolon-k8s_mcp_server_prod \
  --env KUBECONFIG="/path/to/kubeconfig" \
  --env GEMINI_API_KEY="your-gemini-api-key" \
  --env GEMINI_API_ENDPOINT="https://gemini.googleapis.com/v1" \
  -- python -m uvicorn main:app --host 0.0.0.0 --port 8000

How to use

This MCP server provides a natural-language interface for interacting with a Kubernetes cluster. It uses a FastAPI backend that serves an MCP schema describing the supported commands, and it leverages Google's Gemini model via kubectl-ai to translate natural-language prompts into Kubernetes operations.

Once the MCP server is running, connect your kubectl-ai client to the endpoint and start issuing high-level prompts such as listing pods, scaling deployments, or inspecting cluster resources. The server exposes an MCP schema at /mcp-schema.json, which clients can fetch to discover the available commands and their required parameters.

To use it in a workflow, configure kubectl-ai with the endpoint and your Gemini API credentials, then issue prompts such as: “List all pods in the default namespace”, “Scale my-website-app to 5 replicas”, or “Get the status of all nodes.” The integration hides Kubernetes complexity behind concise natural-language prompts while preserving the ability to perform precise actions when needed.
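The schema-driven flow above can be sketched in a few lines of Python. Note that the schema contents and field names below (`commands`, `params`) are hypothetical placeholders: the actual shape of /mcp-schema.json depends on this server's implementation, so fetch the real schema before relying on any structure.

```python
import json

# Hypothetical example of what a fetched /mcp-schema.json payload might
# look like -- field names are illustrative, not the server's actual schema.
sample_schema = json.loads("""
{
  "commands": {
    "list_pods": {"params": ["namespace"]},
    "scale_deployment": {"params": ["name", "replicas"]}
  }
}
""")

def validate_call(schema, command, **kwargs):
    """Check a proposed command against the schema before sending it."""
    commands = schema["commands"]
    if command not in commands:
        raise ValueError(f"unknown command: {command}")
    missing = [p for p in commands[command]["params"] if p not in kwargs]
    if missing:
        raise ValueError(f"missing params for {command}: {missing}")
    return {"command": command, "args": kwargs}

call = validate_call(sample_schema, "scale_deployment",
                     name="my-website-app", replicas=5)
print(call["command"])  # scale_deployment
```

Validating a call against the fetched schema before dispatching it is a cheap way to catch malformed prompts client-side instead of round-tripping them through the server.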

How to install

Prerequisites:

  • Ubuntu 22.04+ or compatible Linux environment
  • Python 3.9+ and pip
  • Access to a Kubernetes cluster (local Minikube or remote) and kubectl installed
  • Gemini API access key

Install steps:

  1. Prepare environment
sudo apt-get update
sudo apt-get install -y python3 python3-pip git
  2. Clone the MCP server repository
git clone https://github.com/your-organization/k8s-mcp-server-prod.git
cd k8s-mcp-server-prod
  3. Install Python dependencies
pip3 install -r requirements.txt
  4. Configure environment variables
  • Create a config file or export variables:
export GEMINI_API_KEY=your-key
export GEMINI_API_ENDPOINT=https://gemini.googleapis.com/v1
export KUBECONFIG=~/.kube/config
  5. Run the MCP server
python -m uvicorn main:app --host 0.0.0.0 --port 8000
  6. (Optional) Run inside Docker or Kubernetes for production
  • Build a container with the Python runtime and uvicorn startup, then deploy to your cluster or container platform.
  • Expose the service via a NodePort or LoadBalancer, and ensure the MCP schema endpoint is reachable at http://<host>:8000/mcp-schema.json
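To see what the /mcp-schema.json endpoint looks like from a client's perspective, here is a minimal standard-library sketch of a server exposing such an endpoint. The real server uses FastAPI behind uvicorn, and the schema body shown is an invented placeholder; this only illustrates the fetch-and-parse round trip.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative schema body; the real server's schema is richer.
SCHEMA = {"commands": {"list_pods": {"params": ["namespace"]}}}

class SchemaHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/mcp-schema.json":
            body = json.dumps(SCHEMA).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Port 0 lets the OS pick a free port, so the demo never collides.
server = HTTPServer(("127.0.0.1", 0), SchemaHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/mcp-schema.json"
with urllib.request.urlopen(url) as resp:
    fetched = json.load(resp)
server.shutdown()
print(sorted(fetched["commands"]))  # ['list_pods']
```

Against the production deployment the fetch is the same one-liner, just pointed at http://<host>:8000/mcp-schema.json.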

Additional notes

Tips and caveats:

  • The MCP schema at /mcp-schema.json defines the supported prompts and their required parameters; fetch it before crafting prompts to ensure valid commands.
  • Ensure the Gemini API key has the required permissions and that the endpoint is accessible from your deployment environment.
  • If you’re running in Kubernetes, consider mounting the kubeconfig as a secret and referencing it via an environment variable or volume.
  • For production, run uvicorn with a process manager (e.g., gunicorn) and behind an HTTPS ingress to protect API traffic.
  • When prompts fail or return unexpected Kubernetes actions, enable verbose logs in FastAPI to diagnose translation failures from NL prompts to Kubernetes commands.
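Beyond verbose logging, one defensive pattern worth considering is gating translated commands before execution. The sketch below assumes the translation layer hands back a plain kubectl command string; the allowlist policy and the `guard_kubectl` helper are hypothetical, not part of this server.

```python
import shlex

# Verbs we allow the NL translation layer to execute; anything else is
# rejected for manual review. This particular set is a hypothetical policy.
ALLOWED_VERBS = {"get", "describe", "scale", "logs", "top"}

def guard_kubectl(command: str) -> list:
    """Validate a translated kubectl command before running it."""
    parts = shlex.split(command)
    if not parts or parts[0] != "kubectl":
        raise ValueError(f"not a kubectl command: {command!r}")
    verb = parts[1] if len(parts) > 1 else ""
    if verb not in ALLOWED_VERBS:
        raise ValueError(f"verb {verb!r} not in allowlist")
    return parts  # safe to hand to subprocess.run(parts)

print(guard_kubectl("kubectl scale deployment my-website-app --replicas=5")[1])  # scale
```

A guard like this turns a surprising translation (e.g. an unintended `delete`) into a logged rejection rather than a destructive cluster action.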
