ai-infrastructure-agent
AI Infrastructure Agent is an intelligent system that allows you to manage AWS infrastructure using natural language commands.
claude mcp add --transport stdio versuscontrol-ai-infrastructure-agent \
  docker run -d --name ai-infrastructure-agent \
  -p 8080:8080 \
  -v ./config.yaml:/app/config.yaml:ro \
  -v ./states:/app/states \
  -e OPENAI_API_KEY=your-openai-api-key-here \
  -e AWS_ACCESS_KEY_ID=your-aws-access-key \
  -e AWS_SECRET_ACCESS_KEY=your-aws-secret-key \
  -e AWS_DEFAULT_REGION=us-west-2 \
  ghcr.io/versuscontrol/ai-infrastructure-agent

Environment variables:
- OPENAI_API_KEY: Your OpenAI API key (for the OpenAI provider) or a placeholder
- GEMINI_API_KEY: Your Gemini API key (optional)
- ANTHROPIC_API_KEY: Your Anthropic API key (optional)
- OLLAMA_SERVER_URL: Ollama server URL (optional, default http://localhost:11434)
- AWS_ACCESS_KEY_ID: AWS access key ID
- AWS_SECRET_ACCESS_KEY: AWS secret access key
- AWS_DEFAULT_REGION: AWS region (e.g., us-west-2)
How to use
AI Infrastructure Agent is an MCP server that provides an intelligent, natural-language interface to design, plan, and execute AWS infrastructure changes. The server exposes a web dashboard and an execution workflow that analyzes a user request, determines the required AWS resources, and coordinates actions with the underlying cloud APIs while supporting safety features like dry-run and conflict resolution. You can connect via Docker deployment or other supported runtimes described in the docs, configure AI providers (OpenAI, Gemini, Anthropic, AWS Bedrock Nova, or Ollama), and supply AWS credentials to enable provisioning tasks such as VPCs, EC2 instances, security groups, and autoscaling groups. Use the web dashboard to review execution plans before approving them, monitor progress in real time, and inspect results after completion.
How to install
Prerequisites:
- Docker installed on your host (for the Docker deployment method described here)
- Access credentials for your AI provider (API keys) and AWS credentials
- Git for cloning or downloading the repository (optional, if you assemble from sources)
Step-by-step installation (Docker-based):
1. Pull or build the container image (the example uses ghcr.io/versuscontrol/ai-infrastructure-agent):
   - Ensure Docker is installed and running
2. Prepare configuration and state directories:
   - Create a config.yaml with your agent settings (provider, model, dry_run, etc.)
   - Create a states directory to persist infrastructure state between runs
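The settings named above (provider, model, dry_run) can be sketched in a minimal config.yaml. Note the exact key names and nesting below are assumptions for illustration, not the project's authoritative schema; check the repository's sample config before using it.

```yaml
# Minimal config.yaml sketch. Key names (agent, provider, model, dry_run, aws,
# region) are assumed for illustration; consult the project's sample config.
agent:
  provider: openai     # one of: openai, gemini, anthropic, bedrock, ollama
  model: gpt-4o        # model name for the chosen provider (assumed example)
  dry_run: true        # plan changes without applying them
aws:
  region: us-west-2
```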
3. Run the server with Docker:
   docker run -d \
     --name ai-infrastructure-agent \
     -p 8080:8080 \
     -v $(pwd)/config.yaml:/app/config.yaml:ro \
     -v $(pwd)/states:/app/states \
     -e OPENAI_API_KEY="your-openai-api-key-here" \
     -e AWS_ACCESS_KEY_ID="your-aws-access-key" \
     -e AWS_SECRET_ACCESS_KEY="your-aws-secret-key" \
     -e AWS_DEFAULT_REGION="us-west-2" \
     ghcr.io/versuscontrol/ai-infrastructure-agent
4. Optional: use Docker Compose (recommended in the docs). Create a docker-compose.yml mirroring the environment and volumes, then run:
   docker-compose up -d
   docker-compose logs -f
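A docker-compose.yml mirroring the docker run command above might look like the following sketch. The service layout is an assumption derived from the run flags in this guide (image, port, volumes, env vars), not a file shipped by the project:

```yaml
# Sketch of a docker-compose.yml equivalent to the docker run command above.
# Values interpolated from the shell environment (${...}) keep secrets out of
# the file itself.
services:
  ai-infrastructure-agent:
    image: ghcr.io/versuscontrol/ai-infrastructure-agent
    container_name: ai-infrastructure-agent
    ports:
      - "8080:8080"
    volumes:
      - ./config.yaml:/app/config.yaml:ro
      - ./states:/app/states
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
      - AWS_DEFAULT_REGION=us-west-2
    restart: unless-stopped
```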
5. Verify the deployment by visiting the web dashboard (default port 8080) and testing a simple request, such as listing available resources or running a dry-run plan.
Prerequisites recap:
- Docker or an alternative MCP runtime compatible with the project
- API keys for chosen AI provider(s)
- AWS credentials and region configuration
- A proper config.yaml file to steer the agent’s behavior (provider, model, dry_run, etc.)
Additional notes
Tips and common considerations:
- Start with dry_run enabled to validate plans before making changes (config.yaml setting dry_run: true).
- Ensure AWS credentials have appropriate permissions for the intended actions.
- If using multiple AI providers, only one provider key is needed per run; switch providers by editing config.yaml.
- When mounting config.yaml and states in Docker, use absolute paths or bind mounts appropriate to your environment.
- Monitor logs with docker-compose logs -f or docker logs -f ai-infrastructure-agent to diagnose issues quickly.
- Keep your config.yaml secure and avoid embedding secrets in version control; prefer environment variable-based keys in deployment scripts.
- If you encounter networking issues, verify that port 8080 is accessible and that the container has network access to AWS endpoints and provider APIs.
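The tip above about keeping secrets out of config.yaml and version control can be sketched with a Docker env file. The .env filename and key values are placeholders, and the docker run invocation is guarded so the snippet degrades gracefully on hosts without Docker:

```shell
# Keep secrets out of config.yaml by passing them through an env file.
# All values below are placeholders.
cat > .env <<'EOF'
OPENAI_API_KEY=your-openai-api-key-here
AWS_ACCESS_KEY_ID=your-aws-access-key
AWS_SECRET_ACCESS_KEY=your-aws-secret-key
AWS_DEFAULT_REGION=us-west-2
EOF
chmod 600 .env   # restrict read access to the secrets file

# On a host with Docker, start the agent using the env file instead of -e flags.
if command -v docker >/dev/null 2>&1; then
  docker run -d --name ai-infrastructure-agent \
    -p 8080:8080 \
    --env-file .env \
    -v "$(pwd)/config.yaml:/app/config.yaml:ro" \
    -v "$(pwd)/states:/app/states" \
    ghcr.io/versuscontrol/ai-infrastructure-agent \
    || echo "docker run failed; check that the daemon is running"
fi
```

Remember to add .env to .gitignore so the keys never land in version control.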