ironcurtain
A secure* runtime for autonomous AI agents. Policy from plain-English constitutions. (*https://ironcurtain.dev)
claude mcp add --transport stdio provos-ironcurtain npx -y @provos/ironcurtain \
  --env OPENAI_API_KEY="your-openai-api-key" \
  --env ANTHROPIC_API_KEY="your-anthropic-api-key" \
  --env GOOGLE_GENERATIVE_AI_API_KEY="your-google-api-key"
How to use
IronCurtain is an MCP server that enforces security policies for autonomous AI agents by translating human-readable constitutional rules into deterministic runtime constraints. The MCP layer sits between the agent and its tools (filesystem, git, networking, etc.), inspecting each tool call and deciding whether to allow it, deny it, or escalate it for user approval. This lets agents operate autonomously while respecting boundaries you define in natural language. The server can run in different modes (a built-in agent in a Dockerless setup, or a Docker-isolated external agent) and exposes a policy-driven interface for tool calls and escalations. With the policy engine in place, tool calls are blocked or escalated for approval by default unless your policy explicitly permits them, giving you a robust guardrail without hand-coded checks.
To use IronCurtain, start the MCP server via the recommended distribution channel (for example, via NPX as a quick start). Once running, you interact with it through its policy engine, which inspects each tool call from the agent and either allows it, denies it, or escalates it to you for approval. In mux (terminal multiplexer) mode you get a full interactive TUI for managing multiple sessions and escalations, while the builtin agent mode lets you run tasks directly without Docker. The tooling is designed so that human-readable security intents (the constitution) are compiled into deterministic rules that the MCP server enforces at runtime.
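The allow/deny/escalate flow described above can be sketched conceptually. This is an illustrative model only, not IronCurtain's actual API: the rule shapes, tool names, and the `decide` function are invented for the sketch.

```python
from dataclasses import dataclass

@dataclass
class ToolCall:
    tool: str      # e.g. "filesystem.read"
    target: str    # e.g. a file path or URL

# Deterministic rules of the kind a constitution might compile into
# (structure invented for illustration).
ALLOW = [("filesystem.read", "/workspace")]   # reads inside the workspace
DENY = [("network.fetch", "")]                # empty prefix matches any target

def decide(call: ToolCall) -> str:
    """Return 'allow', 'deny', or 'escalate' for a tool call."""
    for tool, prefix in DENY:
        if call.tool == tool and call.target.startswith(prefix):
            return "deny"
    for tool, prefix in ALLOW:
        if call.tool == tool and call.target.startswith(prefix):
            return "allow"
    return "escalate"  # default: ask the user, never silently allow

print(decide(ToolCall("filesystem.read", "/workspace/notes.md")))  # allow
print(decide(ToolCall("network.fetch", "https://example.com")))    # deny
print(decide(ToolCall("git.push", "origin")))                      # escalate
```

The key design point the sketch captures is the default: anything not explicitly allowed or denied is escalated, so unanticipated tool calls always reach a human.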
How to install
Prerequisites:
- Node.js 22+ (maximum supported Node version can be limited by dependencies; see project docs)
- npm (comes with Node.js)
- Optional: Docker for Docker Agent Mode
Installation steps (quickstart):
- Install the MCP server globally: npm install -g @provos/ironcurtain
- Or run it on demand via NPX, with no permanent install: npx -y @provos/ironcurtain
- Or run from source by cloning the repository and installing dependencies:
  git clone https://github.com/provos/ironcurtain.git
  cd ironcurtain
  npm install
- Set up your API keys and configuration as described in the README (see environment variables below).
- Start the server in your preferred mode (mux, builtin agent, etc.) as documented in the project README.
Prerequisites (summary): ensure Node.js is available, npm is installed, and you have API keys for at least one LLM provider (Anthropic, Google, or OpenAI). If you plan to use Docker Agent Mode, have Docker installed and running.
Additional notes
- Environment variables: ANTHROPIC_API_KEY, GOOGLE_GENERATIVE_AI_API_KEY, OPENAI_API_KEY are supported for configuring LLM providers. Environment variables take precedence over config files.
- If you’re running in Docker mode, you’ll get stronger isolation but may need to configure network and TLS settings as per the SANDBOXING.md and RUNNING_MODES.md docs.
- The policy engine uses a constitution-based approach to derive deterministic rules. Changes to the constitution may alter which tool calls are allowed or escalated; test changes thoroughly in a staging environment.
- For first-time setup, run ironcurtain setup to configure GitHub tokens, web search provider, and model preferences as part of the wizard.
- If you encounter permission or isolation issues, ensure that the runtime has access to your environment variables and that any required Docker daemon permissions are granted.
- This MCP server relies on MCP tooling and standard LLM providers; keep dependencies up to date to avoid breaking policy enforcement.
Related MCP Servers
metorial
Connect any AI model to 600+ integrations; powered by MCP 📡 🚀
codemesh
The Self-Improving MCP Server - Agents write code to orchestrate multiple MCP servers with intelligent TypeScript execution and auto-augmentation
mcp-web-search-tool
A MCP server providing real-time web search capabilities to any AI model.
mcp-auth
🔒 Reference MCP servers that demo how authentication works with the current Model Context Protocol spec.
mcp-reporter
A streamlined utility that generates capability reports for Model Context Protocol servers, helping developers understand the functionality available across their MCP server ecosystem, for documentation or for integration into other tools.
codemode
Programmatic tool calling / Code Mode for MCP — turn any OpenAPI spec into two sandboxed tools (search + execute).