csl-core
Deterministic safety layer for AI agents. Z3-verified policy enforcement.
claude mcp add --transport stdio chimera-protocol-csl-core uv run --with csl-core[mcp] csl-core-mcp
How to use
The CSL-Core MCP server provides a runtime for enforcing Chimera Specification Language (CSL) policies directly from your AI assistant environment. The MCP integration lets you write, verify, and enforce safety policies without embedding the rules in your model prompts: the CSL-Core policy engine compiles CSL policies, runs formal verification, simulates scenarios, and exposes these capabilities through an MCP endpoint that Claude Desktop, Cursor, or VS Code can consume.

Tools exposed via the MCP server:
- verify_policy: compile a policy and prove its consistency
- simulate_policy: evaluate a policy against JSON inputs and return ALLOWED or BLOCKED
- explain_policy: produce a human-readable policy summary
- scaffold_policy: generate a CSL template from a natural-language description

Because enforcement happens outside the model, the MCP workflow provides a deterministic, tamper-resistant safety layer with sub-millisecond latency in the runtime path.
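As a sketch of the shapes involved, the helpers below build a JSON input for simulate_policy and interpret its ALLOWED/BLOCKED decision. The field names (action, user_role) are illustrative assumptions, not a documented schema; the real input shape is whatever your CSL policy expects.

```python
import json

def make_event(action, **context):
    # Build a JSON input for simulate_policy. Field names here are
    # illustrative; use whatever fields your CSL policy references.
    return json.dumps({"action": action, **context})

def is_allowed(decision):
    # simulate_policy returns ALLOWED or BLOCKED; map that to a boolean.
    if decision not in ("ALLOWED", "BLOCKED"):
        raise ValueError(f"unexpected decision: {decision!r}")
    return decision == "ALLOWED"

print(make_event("READ", user_role="analyst"))
print(is_allowed("BLOCKED"))  # → False
```

Treating any decision string other than the two documented values as an error (rather than defaulting to allow) keeps the client fail-closed.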
How to install
Prerequisites:
- Python 3.8+ installed
- pip available
- uv available (the uv/uvx runtime used to launch the MCP server; see the uv project documentation)
Installation steps:
- Install the CSL-Core MCP package via pip with the [mcp] extras to enable MCP support:

  pip install "csl-core[mcp]"

- Verify the installation by running the CLI tools locally (these are the same CSL tools used in MCP workflows):

  cslcore verify my_policy.csl
  cslcore simulate my_policy.csl --input '{"action": "READ"}'

- Configure the MCP server in Claude Desktop (or your editor of choice) to point to the uv-based MCP runtime, using the following example configuration:

  {
    "mcpServers": {
      "csl-core": {
        "command": "uv",
        "args": ["run", "--with", "csl-core[mcp]", "csl-core-mcp"]
      }
    }
  }

- Start the MCP server via your environment (the uv command launches the policy MCP runtime). Ensure the host/port or IPC method used by Claude Desktop is aligned with your environment.

Note: If you are integrating with Claude Desktop, place the JSON above in the Claude Desktop config file (e.g., ~/Library/Application Support/Claude/claude_desktop_config.json on macOS) and restart Claude Desktop to load the MCP server.
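Rather than hand-editing the config file, the steps above can be scripted. This is a minimal sketch (the merge logic is an assumption about the config shape, matching the JSON shown above); it is demonstrated against a scratch file so it never touches your live config.

```python
import json
import tempfile
from pathlib import Path

def add_csl_server(config_path):
    # Merge the csl-core entry into a Claude Desktop config file,
    # preserving any MCP servers that are already registered.
    path = Path(config_path)
    config = json.loads(path.read_text()) if path.exists() else {}
    config.setdefault("mcpServers", {})["csl-core"] = {
        "command": "uv",
        "args": ["run", "--with", "csl-core[mcp]", "csl-core-mcp"],
    }
    path.write_text(json.dumps(config, indent=2))
    return config

# Demonstrate against a scratch file, not the live config path.
scratch = Path(tempfile.gettempdir()) / "claude_desktop_config.json"
scratch.unlink(missing_ok=True)
cfg = add_csl_server(scratch)
print(sorted(cfg["mcpServers"]))  # → ['csl-core']
```

To apply it for real, pass your platform's actual Claude Desktop config path instead of the scratch file, then restart Claude Desktop.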
Additional notes
Tips and known considerations:
- The MCP server executes CSL policy checks outside the LLM, providing deterministic enforcement. Prompt injections have no effect on the runtime enforcement.
- Exposed tools (verify_policy, simulate_policy, explain_policy, scaffold_policy) help you develop and validate policies before deploying them via MCP.
- If you update a CSL policy, re-run verify_policy and simulate_policy to ensure the changes remain consistent and pass all tests.
- When using LangChain integrations or other orchestration layers, inject environment variables or runtime context (user_role, environment, rate limits, etc.) at the tool layer, so that policy checks remain robust against context manipulation.
- When debugging, check MCP runtime logs for latency and policy evaluation traces; the runtime is designed to be <1 ms per evaluation, but integration overhead may vary by environment.
Related MCP Servers
template-repo
Agent orchestration & security template featuring MCP tool building, agent2agent workflows, mechanistic interpretability on sleeper agents, and agent integration via CLI wrappers
langgraph-ai
LangGraph AI Repository
mcp-playground
A Streamlit-based chat app for LLMs with plug-and-play tool support via Model Context Protocol (MCP), powered by LangChain, LangGraph, and Docker.
MCP-MultiServer-Interoperable-Agent2Agent-LangGraph-AI-System
This project demonstrates a decoupled real-time agent architecture that connects LangGraph agents to remote tools served by custom MCP (Model Context Protocol) servers. The architecture enables a flexible, scalable multi-agent system where each tool can be hosted independently (via SSE or STDIO), offering modularity and cloud-deployable execution.
mcp-templates
A flexible platform that provides Docker & Kubernetes backends, a lightweight CLI (mcpt), and client utilities for seamless MCP integration. Spin up servers from templates, route requests through a single endpoint with load balancing, and support both deployed (HTTP) and local (stdio) transports — all with sensible defaults and YAML-based configs.
AI-web mode
An intelligent conversational assistant web application based on MCP (Model Context Protocol), supporting real-time chat, tool calling, and conversation history management.