cupcake
A native policy enforcement layer for AI coding agents. Built on OPA/Rego.
claude mcp add --transport stdio eqtylab-cupcake node server.js \
  --env PORT="3000" \
  --env LOG_LEVEL="info" \
  --env MCP_LOG_DIR="/var/log/cupcake"
How to use
Cupcake is a policy-enforcement layer that sits between AI agent runtimes and their actions, ensuring that tool calls and commands follow user-defined rules written in OPA/Rego and compiled to WebAssembly. It acts as a Model Context Protocol (MCP) server, exposing policy evaluation to harness integrations such as Claude Code, Cursor, Factory AI, and OpenCode. By intercepting proposed actions, enriching them with real-time signals, and evaluating them against Wasm policies, Cupcake can allow, modify, block, warn on, or require human review of actions before they reach the agent runtime. This yields deterministic rule-following, improved security, and the ability to move governance decisions out of the agent's context. The server supports both deterministic policy evaluation (Rego compiled to Wasm) and an LLM-as-Judge mode, making it suitable for environments that demand strict guardrails as well as flexible oversight.
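To make the decision model above concrete, here is a hedged sketch of what a single evaluation result might look like. The field names ("decision", "rule", "reason") are illustrative assumptions for this page, not a confirmed Cupcake response schema:

```shell
# Hedged sketch: the general shape of a policy decision an evaluation might
# return. Field names are assumptions, not confirmed by the Cupcake docs.
decision_json=$(cat <<'EOF'
{
  "decision": "block",
  "rule": "deny_destructive_shell",
  "reason": "Destructive shell command blocked by policy"
}
EOF
)
echo "$decision_json"
```

A "modify" decision would carry a rewritten action instead of a reason, and a "require review" decision would pause the action until a human approves it.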
How to install
Prerequisites:
- Node.js (recommended), or a containerized runtime if running via Docker
- Access to the repository hosting Cupcake (clone or install from npm)
Option A: Install via npm (recommended for Node.js projects)
- Install the package globally or in your project:
npm install -g cupcake@latest
or in your project: npm install cupcake@latest
- Start the Cupcake MCP server (if installed locally): npx cupcake start
- Verify the server is listening on the configured port (default 3000).
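A quick way to carry out the verification step above is to probe the default port. This is a hedged convenience check, not from the project docs; curl availability and the exact endpoint depend on your system and configuration:

```shell
# Hedged check: see whether anything answers on the default port (3000).
# The root path "/" is an assumption; adjust if your deployment differs.
status=$(curl -sf http://localhost:3000/ >/dev/null 2>&1 \
  && echo "reachable" \
  || echo "not reachable on port 3000")
echo "cupcake: $status"
```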
Option B: Run from source (Node.js)
- Clone the repo: git clone https://github.com/eqtylab/cupcake.git
- Navigate to the project and install dependencies: cd cupcake && npm install
- Run the server: node server.js
- Ensure the server exposes the MCP API on the expected port (default 3000).
Option C: Docker (if provided)
- Build or pull the Cupcake image from the repository: docker pull eqtylab/cupcake:latest
- Run the container (example): docker run -d -p 3000:3000 --name cupcake eqtylab/cupcake:latest
- Confirm the MCP API is accessible at http://localhost:3000/
Note: Adjust PORT and other environment variables as needed for your deployment environment. See Additional notes below for environment variable details.
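As a deployment configuration sketch, the environment variables described under Additional notes can be passed straight to the Option C container. The image name and tag repeat Option C and may differ in your environment:

```shell
# Hedged example: Option C with the environment variables from Additional
# notes passed through --env (values are the documented defaults/examples).
docker run -d -p 3000:3000 \
  --env PORT=3000 \
  --env LOG_LEVEL=info \
  --env MCP_LOG_DIR=/var/log/cupcake \
  --name cupcake eqtylab/cupcake:latest
```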
Additional notes
Environment variables and configuration tips:
- PORT: Port on which the MCP server listens (default 3000).
- LOG_LEVEL: Logging verbosity (e.g., debug, info, warn, error).
- MCP_LOG_DIR: Directory for structured evaluation traces and logs.
- Ensure the policy modules (OPA/Rego compiled to Wasm) are available to the server at runtime.
- When integrating with multiple harnesses, provide clear policies per harness under policies/ directories (e.g., policies/claude/, policies/cursor/, policies/factory/, policies/opencode/).
- Common issues: port conflicts, missing policy WASM modules, or misconfigured environment variables leading to initialization failures. Check logs for policy evaluation errors and ensure Wasm modules are correctly compiled and loaded.
- If you need to customize the default actions, you can tune the decision behavior (Allow, Modify, Block, Warn, Require Review) via policy definitions and, where supported, by binding contextual hints to responses.
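The variables listed above can be set in the shell before starting the server. A minimal sketch, using the defaults and example values given on this page:

```shell
# Export the documented configuration before launching the server.
export PORT=3000                      # default listen port
export LOG_LEVEL=info                 # debug | info | warn | error
export MCP_LOG_DIR=/var/log/cupcake   # structured traces and logs
echo "PORT=$PORT LOG_LEVEL=$LOG_LEVEL MCP_LOG_DIR=$MCP_LOG_DIR"
```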
Related MCP Servers
claude-context
Code search MCP for Claude Code. Makes the entire codebase the context for any coding agent.
claude-talk-to-figma
A Model Context Protocol (MCP) server that allows Claude Desktop and other AI tools (Claude Code, Cursor, Antigravity, etc.) to read, analyze, and modify Figma designs.
systemprompt-code-orchestrator
MCP server for orchestrating AI coding agents (Claude Code CLI & Gemini CLI). Features task management, process execution, Git integration, and dynamic resource discovery. Full TypeScript implementation with Docker support and Cloudflare Tunnel integration.
swiftlens
SwiftLens is a Model Context Protocol (MCP) server that provides deep, semantic-level analysis of Swift codebases to any AI models. By integrating directly with Apple's SourceKit-LSP, SwiftLens enables AI models to understand Swift code with compiler-grade accuracy.
google-ai-mode
MCP server for free Google AI Mode search with citations. Query optimization, CAPTCHA handling, multi-agent support. Works with Claude Code, Cursor, Cline, Windsurf.
agent-security-scanner
Security scanner MCP server for AI coding agents. Prompt injection firewall, package hallucination detection (4.3M+ packages), 1000+ vulnerability rules with AST & taint analysis, auto-fix.