
llamator

MCP server for llamator: automate LLM red teaming workflows

Installation
Run this command in your terminal to add the MCP server to Claude Code:

claude mcp add --transport stdio romiconez-llamator-mcp-server -- docker compose up --build

How to use

LLAMATOR MCP server provides a Streamable HTTP transport layered on top of a FastAPI application. It exposes two integration surfaces: the HTTP API for submitting and monitoring LLAMATOR-based red teaming runs, and the MCP interface mounted inside the FastAPI app that lets external tooling invoke LLAMATOR runs as MCP tools. The system orchestrates jobs asynchronously via ARQ with Redis, and artifacts produced by LLAMATOR are stored in MinIO and retrieved via presigned URLs. You can interact with the HTTP API to create runs, check status, and fetch artifacts, or leverage the MCP tools create_llamator_run and get_llamator_run to integrate LLAMATOR runs into your tooling pipelines using the standardized MCP protocol.

To use the MCP interface, connect to the MCP endpoint mounted at /mcp in the running FastAPI service. The exposed tools include create_llamator_run, which submits a run, waits for completion, and returns aggregated metrics along with an artifacts download URL when available; and get_llamator_run, which retrieves the metrics for a finished run and the optional artifacts URL. The API supports optional API key protection via X-API-Key and exposes OpenAPI for exploration. The complete stack relies on Redis for durable state, MinIO for artifacts, and a configurable OpenAI-like model interface for LLAMATOR execution.
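A tool invocation over that MCP endpoint travels as a JSON-RPC 2.0 `tools/call` message. The sketch below builds such a payload; the tool names come from the description above, but the argument names are assumptions for illustration.

```python
import json


def mcp_tool_call(tool: str, arguments: dict, request_id: int = 1) -> str:
    """Serialize an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })


# Submit a run and wait for aggregated metrics (argument names are hypothetical):
payload = mcp_tool_call("create_llamator_run", {"attack_model": "gpt-4o-mini"})
```

Your MCP client library normally builds these frames for you; the shape is shown here only so you can recognize the payloads the Streamable HTTP transport carries.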

How to install

Prerequisites:

  • Docker and Docker Compose installed on your machine
  • Basic familiarity with running services in containers

Installation steps:

  1. Clone the repository:

    git clone <repository-url> llamator-mcp-server
    cd llamator-mcp-server

  2. Copy the example environment configuration (if provided) and tailor it to your environment. For local development:

    cp .env.example .env

    Edit .env to configure the Redis DSN, MinIO credentials, API keys, etc.

  3. Start the services with Docker Compose:

    docker compose up --build

  4. Verify the services are running, for example by requesting the healthcheck endpoint (typically http://localhost:8000/v1/health).

  5. Optional: run the local development stack without Docker (if you prefer a Python-based setup):

    Install dependencies:

    poetry install

    Run the API server:

    uvicorn llamator_mcp_server.main:app --host 0.0.0.0 --port 8000

    Run the ARQ worker for LLAMATOR executions:

    arq llamator_mcp_server.worker_settings.WorkerSettings

Prerequisites recap:

  • Docker and Docker Compose for the recommended Docker-based deployment
  • Optional Python tooling (Poetry, uvicorn) for local development without containers

Additional notes

Notes and tips:

  • The MCP interface uses Streamable HTTP transport; ensure your tooling can send/receive the expected RPC payloads.
  • If you enable API key protection, supply X-API-Key headers when invoking both HTTP and MCP endpoints.
  • Artifacts are stored under MinIO and accessed via presigned URLs; the API does not redirect for artifact downloads.
  • Healthcheck endpoint is typically at /v1/health on the HTTP API; monitor /metrics for Prometheus metrics as needed.
  • Environment variables prefixed with LLAMATOR_MCP_ control Redis, MinIO, API security, and LLAMATOR execution endpoints; see DOCUMENTATION.md for a complete reference.
  • When running locally with Docker Compose, ensure port mappings (default 8000, 9000, 9001) do not conflict with other services.
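Settings with the LLAMATOR_MCP_ prefix mentioned above can be collected with a small helper like the following. The specific variable names shown are hypothetical examples; DOCUMENTATION.md has the authoritative list.

```python
import os

PREFIX = "LLAMATOR_MCP_"


def load_settings(environ=os.environ) -> dict:
    """Collect LLAMATOR_MCP_-prefixed variables into a lowercase settings dict."""
    return {
        key[len(PREFIX):].lower(): value
        for key, value in environ.items()
        if key.startswith(PREFIX)
    }


# Hypothetical variable names for illustration:
example_env = {
    "LLAMATOR_MCP_REDIS_DSN": "redis://redis:6379/0",
    "LLAMATOR_MCP_API_KEY": "secret",
    "PATH": "/usr/bin",  # ignored: no LLAMATOR_MCP_ prefix
}
settings = load_settings(example_env)  # {"redis_dsn": "...", "api_key": "secret"}
```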
