
Atla MCP Server

An MCP server implementation providing a standardized interface for LLMs to interact with the Atla API.

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio --env ATLA_API_KEY="<your-atla-api-key>" \
  atla-ai-atla-mcp-server -- uvx atla-mcp-server

How to use

The Atla MCP server provides a standardized interface for evaluating LLM outputs against Atla's evaluation criteria. It exposes two main tools:

  • evaluate_llm_response — scores a single LLM response against a given prompt and criterion, returning a numeric score plus a textual critique.
  • evaluate_llm_response_on_multiple_criteria — runs the evaluation across multiple criteria and returns a list of results, each containing a score and critique.

To use the server, you must supply an Atla API key, which authenticates requests to the Atla evaluation models. Once connected, you can integrate the MCP server with clients that support the MCP protocol (for example, the OpenAI Agents SDK, Claude Desktop, or Cursor) to submit prompts and receive structured evaluation feedback.

Typical workflow:

  • Provide a prompt and an LLM response to evaluate.
  • Choose one or more evaluation criteria (for example, relevance, correctness, safety, or other Atla-defined metrics).
  • Retrieve a score and a critique that explains why the score was assigned. For multi-criteria runs, review the aggregated results to understand trade-offs across criteria.
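
The workflow above can be sketched as plain request/response payloads. The field names here (llm_prompt, evaluation_criteria, score, critique) are illustrative assumptions based on the tool descriptions, not a confirmed schema:

```python
# Illustrative payloads for the two evaluation tools.
# Field names are assumptions drawn from the tool descriptions above,
# not a confirmed request schema.

single_request = {
    "llm_prompt": "Summarize the article in one sentence.",
    "llm_response": "The article argues that remote work boosts productivity.",
    "evaluation_criteria": "Is the summary faithful to the source?",
}

multi_request = {
    "llm_prompt": single_request["llm_prompt"],
    "llm_response": single_request["llm_response"],
    "evaluation_criteria_list": [
        "Is the summary faithful to the source?",
        "Is the summary a single sentence?",
    ],
}

# A multi-criteria call returns one result per criterion,
# each containing a score and a textual critique.
expected_result_keys = {"score", "critique"}
```

A multi-criteria run with the payload above would yield two results, one per criterion, which you can compare to understand trade-offs.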

The server is designed for environments that can host MCP servers and connect via standard MCP tooling. Launch it with uvx and your Atla API key, then point your client (OpenAI Agents SDK, Claude Desktop, Cursor, etc.) at the running atla-mcp-server process to begin evaluating prompts and responses.
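
For clients that read a JSON MCP configuration (Claude Desktop uses this shape), the server entry might look like the following. The server name key is our choice; the command and environment variable mirror the run command in this document:

```json
{
  "mcpServers": {
    "atla-mcp-server": {
      "command": "uvx",
      "args": ["atla-mcp-server"],
      "env": {
        "ATLA_API_KEY": "<your-atla-api-key>"
      }
    }
  }
}
```

After restarting the client, the two evaluation tools should appear in its tool list.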

How to install

Prerequisites:

  • A supported Python environment and uv (as recommended in the Atla docs).
  • An Atla API key (required to perform evaluations).

Installation steps:

  1. Install uv (the Python package and environment manager from Astral) if you don't have it:
# Example for Unix-like systems
curl -LsSf https://astral.sh/uv/install.sh | sh
  2. Ensure you have Python installed and accessible in your PATH. You can verify with:
python --version
  3. Obtain your Atla API key from the Atla website (sign-in or sign-up): https://www.atla-ai.com/sign-in or https://www.atla-ai.com/sign-up

  4. Run the MCP server using uvx with your API key:

ATLA_API_KEY=<your-atla-api-key> uvx atla-mcp-server

This starts the Atla MCP server over stdio under the name atla-mcp-server so MCP clients can connect to it. If you need to run it in a different environment (e.g., inside a container or via a configuration manager), use the same command structure and adjust deployment specifics accordingly.
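
In scripted deployments it helps to fail fast when the key is missing rather than let the server start and reject every request. A minimal sketch — the helper name is ours, not part of atla-mcp-server:

```python
import os


def require_atla_api_key() -> str:
    """Return ATLA_API_KEY from the environment, failing fast if it is absent.

    Hypothetical helper for launcher scripts; atla-mcp-server itself
    reads the variable directly from its environment.
    """
    key = os.environ.get("ATLA_API_KEY", "")
    if not key:
        raise RuntimeError(
            "ATLA_API_KEY is not set; export it before launching atla-mcp-server"
        )
    return key
```

A wrapper script can call this before spawning `uvx atla-mcp-server`, so misconfiguration surfaces immediately with a clear message.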

Additional notes

Notes and tips:

  • The repository and API were archived as of July 21, 2025, so the Atla API may be deprecated or not actively maintained. Expect potential changes or lack of support.
  • Securely manage your ATLA_API_KEY; do not hard-code it in source files. Use environment variables or secret managers in deployment environments.
  • When connecting from different MCP clients (OpenAI Agents SDK, Claude Desktop, Cursor), ensure the client configuration points to your running atla-mcp-server and passes the ATLA_API_KEY as an environment variable.
  • If you encounter connection issues, verify that uvx is correctly installed and that your API key is valid. Check firewall rules and network connectivity to the Atla API endpoints.
  • The available tools focus on evaluation tasks. There are two primary operations: evaluate_llm_response for single-criteria evaluation and evaluate_llm_response_on_multiple_criteria for multi-criteria evaluation. Use the appropriate tool based on your evaluation needs.
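
The choice between the two tools reduces to a small dispatch on the number of criteria. The tool names match those listed above; the helper function itself is a hypothetical sketch:

```python
def pick_evaluation_tool(criteria: list[str]) -> str:
    """Return the MCP tool name appropriate for the given criteria.

    Hypothetical client-side helper; the tool names are the two
    operations exposed by the Atla MCP server.
    """
    if not criteria:
        raise ValueError("at least one evaluation criterion is required")
    if len(criteria) == 1:
        return "evaluate_llm_response"
    return "evaluate_llm_response_on_multiple_criteria"
```

For example, `pick_evaluation_tool(["relevance"])` selects the single-criterion tool, while passing several criteria selects the multi-criteria one.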
