compliant-llm
Build secure and compliant AI agents and MCP servers. YC W23
claude mcp add --transport stdio fiddlecube-compliant-llm \
  --env DISABLE_COMPLIANT_LLM_TELEMETRY=true \
  -- python -m compliant_llm dashboard
(The --env flag is optional; set DISABLE_COMPLIANT_LLM_TELEMETRY=true to opt out of anonymous telemetry.)
How to use
Compliant LLM MCP Server provides a security and compliance toolkit for evaluating AI agents and GenAI workflows. Once started, you can access a visual dashboard that surfaces test results, compliance checks, and risk signals across multiple providers. The server orchestrates provider integrations, runs attack simulations (such as prompt injections or policy violations), and produces actionable reports aligned with frameworks like NIST, ISO, GDPR, HIPAA, and others. Use the dashboard to configure your LLM provider(s), initiate end-to-end testing, and review detailed findings and remediation guidance. The tooling is designed to help infosec and compliance teams validate that AI systems behave safely and in compliance across different environments and providers.
How to install
Prerequisites:
- Python 3.8+ and pip
- Internet access to install dependencies from PyPI
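The prerequisites above can be checked from a shell before installing. This is a generic sanity check, not a command shipped with compliant-llm (use python3 if python is not on your PATH):

```shell
# Verify the interpreter meets the Python 3.8+ requirement.
python -c 'import sys; assert sys.version_info >= (3, 8), "Python 3.8+ required"; print("python ok")'
# Confirm pip is available in the same environment.
python -m pip --version
```

If the first command raises an AssertionError, upgrade Python before proceeding.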
Installation steps:
- Create and activate a virtual environment (optional but recommended):
  python -m venv venv
  On Windows:
  venv\Scripts\activate
  On macOS/Linux:
  source venv/bin/activate
- Install the compliant-llm package from PyPI:
  pip install compliant-llm
- Run the MCP server dashboard (this starts the MCP-enabled interface and tools):
  compliant-llm dashboard
- Optional: set environment variables before running to customize behavior (see additional notes):
  export DISABLE_COMPLIANT_LLM_TELEMETRY=true
Notes:
- The CLI entry point is the compliant-llm dashboard command provided by the compliant-llm package.
- If you containerize this, use a Python image with the same entry point (see mcp_config for a sample command).
Additional notes
Tips and common issues:
- Telemetry: You can disable anonymized telemetry by setting DISABLE_COMPLIANT_LLM_TELEMETRY=true in your environment.
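The opt-out is a plain environment variable, so for a single session it can be set just before launching the dashboard; add the export to your shell profile to make it persistent:

```shell
# Opt out of anonymized telemetry for this shell session only.
export DISABLE_COMPLIANT_LLM_TELEMETRY=true
echo "telemetry opt-out: ${DISABLE_COMPLIANT_LLM_TELEMETRY}"
```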
- Provider configuration: Before running tests, configure your preferred LLM providers in the dashboard UI or via configuration files as documented in the project.
- MCP readiness: This server integrates with multiple LLM providers and testing modules; ensure network access to provider endpoints and any required API keys or credentials.
- Logs and debugging: If tests fail to start, check the Python environment, verify the package version, and consult the compliant-llm docs for any breaking changes in provider adapters.
- Deployment: For production usage, consider running behind a reverse proxy, enabling authentication for the dashboard, and securing API keys in environment variables or secret managers.
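One common way to keep API keys out of shell history and process listings is to load them from an env file that stays out of version control. A minimal sketch, where the file name secrets.env and the OPENAI_API_KEY variable are illustrative assumptions rather than compliant-llm conventions:

```shell
# Write a demo secrets file (in practice, create this by hand and gitignore it).
printf 'OPENAI_API_KEY=example-key\n' > secrets.env
set -a            # auto-export every variable defined while sourcing
. ./secrets.env
set +a
echo "key loaded: ${OPENAI_API_KEY:+yes}"
# Launch the dashboard afterwards, e.g.: compliant-llm dashboard
```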
Related MCP Servers
web-agent-protocol
🌐Web Agent Protocol (WAP) - Record and replay user interactions in the browser with MCP support
google_ads_mcp
The Google Ads MCP Server is an implementation of the Model Context Protocol (MCP) that enables Large Language Models (LLMs), such as Gemini, to interact directly with the Google Ads API.
mcp-gateway
MCP Gateway and Registry
MCP-Dandan
MCP Security Solution for Agentic AI — real-time proxying, behavior analysis, and malicious tool detection
pentesting s-checklist
A practical, community-driven checklist for pentesting MCP servers. Covers traffic analysis, tool-call behavior, namespace abuse, auth flows, and remote server risks. Maintained by Appsecco and licensed for remixing.
AI-SOC-Agent
Blackhat 2025 presentation and codebase: AI SOC agent & MCP server for automated security investigation, alert triage, and incident response. Integrates with ELK, IRIS, and other platforms.