
compliant-llm

Build Secure and Compliant AI agents and MCP Servers. YC W23

Installation
Run this command in your terminal to add the MCP server to Claude Code:

Command

claude mcp add --transport stdio fiddlecube-compliant-llm \
  --env DISABLE_COMPLIANT_LLM_TELEMETRY=true \
  -- python -m compliant_llm dashboard

Setting DISABLE_COMPLIANT_LLM_TELEMETRY=true is optional; it opts out of anonymous telemetry (see Additional notes).

How to use

Compliant LLM MCP Server provides a security and compliance toolkit for evaluating AI agents and GenAI workflows. Once started, it exposes a visual dashboard that surfaces test results, compliance checks, and risk signals across multiple providers. The server orchestrates provider integrations, runs attack simulations (such as prompt injections and policy violations), and produces actionable reports aligned with frameworks such as NIST, ISO, GDPR, and HIPAA. Use the dashboard to configure your LLM provider(s), initiate end-to-end testing, and review detailed findings and remediation guidance. The tooling is designed to help infosec and compliance teams validate that AI systems behave safely and remain compliant across different environments and providers.

How to install

Prerequisites:

  • Python 3.8+ and pip
  • Internet access to install dependencies from PyPI

Installation steps:

  1. Create and activate a virtual environment (optional but recommended):

     python -m venv venv

     On Windows:

     venv\Scripts\activate

     On macOS/Linux:

     source venv/bin/activate

  2. Install the compliant-llm package from PyPI: pip install compliant-llm

  3. Run the MCP server dashboard (this starts the MCP-enabled interface and tools): compliant-llm dashboard

  4. Optional: set environment variables before running to customize behavior (see additional notes): export DISABLE_COMPLIANT_LLM_TELEMETRY=true
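The steps above can be sketched as a small Python launcher. This is a minimal sketch, assuming the package is already installed and that python -m compliant_llm dashboard is the module entry point shown in the install command; it is not part of the compliant-llm API itself:

```python
import os
import subprocess
import sys


def build_dashboard_command():
    """Return the argv list that launches the dashboard via the module entry point."""
    return [sys.executable, "-m", "compliant_llm", "dashboard"]


def build_env(disable_telemetry=True):
    """Copy the current environment, optionally opting out of anonymous telemetry."""
    env = os.environ.copy()
    if disable_telemetry:
        env["DISABLE_COMPLIANT_LLM_TELEMETRY"] = "true"
    return env


if __name__ == "__main__":
    # Blocks until the dashboard process exits.
    subprocess.run(build_dashboard_command(), env=build_env(), check=True)
```

Running the script is equivalent to exporting the variable and invoking the dashboard manually, but keeps the launch settings in one place.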

Notes:

  • The CLI entry point is the compliant-llm dashboard command provided by the compliant-llm package.
  • If you containerize this, use a Python image with the same entry point (see mcp_config for a sample command).
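For reference, a generic MCP client configuration for this server might look like the following. The file name and top-level key vary by client; this sketch uses the common mcpServers convention, so treat the exact shape as an assumption and consult your client's documentation:

```json
{
  "mcpServers": {
    "fiddlecube-compliant-llm": {
      "command": "python",
      "args": ["-m", "compliant_llm", "dashboard"],
      "env": {
        "DISABLE_COMPLIANT_LLM_TELEMETRY": "true"
      }
    }
  }
}
```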

Additional notes

Tips and common issues:

  • Telemetry: You can disable anonymized telemetry by setting DISABLE_COMPLIANT_LLM_TELEMETRY=true in your environment.
  • Provider configuration: Before running tests, configure your preferred LLM providers in the dashboard UI or via configuration files as documented in the project.
  • MCP readiness: This server integrates with multiple LLM providers and testing modules; ensure it has network access to provider endpoints and that any required API keys or credentials are available in its environment.
  • Logs and debugging: If tests fail to start, check the Python environment, verify the package version, and consult the compliant-llm docs for any breaking changes in provider adapters.
  • Deployment: For production usage, consider running behind a reverse proxy, enabling authentication for the dashboard, and securing API keys in environment variables or secret managers.
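As a concrete illustration of the credential points above, provider API keys can be kept out of code and read from the environment at startup. This is a minimal sketch; the variable name OPENAI_API_KEY below is an illustrative assumption, not a name required by compliant-llm:

```python
import os


def load_provider_credentials(required=("OPENAI_API_KEY",)):
    """Read provider API keys from the environment, failing fast if any are missing.

    The variable names are illustrative; use whichever keys your configured
    providers actually expect.
    """
    creds = {}
    missing = []
    for name in required:
        value = os.environ.get(name)
        if value:
            creds[name] = value
        else:
            missing.append(name)
    if missing:
        raise RuntimeError(f"Missing required credentials: {', '.join(missing)}")
    return creds
```

Calling this before launching the dashboard surfaces misconfiguration immediately instead of mid-test.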
