scorable
MCP for Scorable Evaluation Platform
Start the server with Docker, then register it with your MCP client (for example, Claude Code):

docker run -e SCORABLE_API_KEY=<your_key> -p 0.0.0.0:9090:9090 --name=rs-mcp -d ghcr.io/scorable/scorable-mcp:latest
claude mcp add --transport sse scorable http://localhost:9090/sse
How to use
Scorable MCP Server exposes a set of evaluative tools from Scorable as MCP endpoints. It provides an SSE transport endpoint at /sse (and a newer /mcp path for newer clients) that any MCP-compatible client can connect to. Before connecting, you need a Scorable API key (sign up at scorable.ai/settings/api-keys or use a temporary key).

The server exposes tools such as list_evaluators, run_evaluation, run_evaluation_by_name, run_coding_policy_adherence, list_judges, and run_judge. You can integrate these tools into your agent or workflow to evaluate and improve responses using Scorable's quality criteria.

For deployment, the README demonstrates how to run the server in Docker and how to connect via SSE, including a Cursor config example. If you prefer a local stdio transport, you can run the MCP server on your host and connect through a stdio channel using the uvx-based approach with a git+ URL pointing to the Scorable MCP repository.
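Under the hood, calling one of these tools means sending a JSON-RPC 2.0 tools/call request over the transport. A minimal sketch of what such a request could look like for run_evaluation follows; the argument names (evaluator_id, request, response) are illustrative assumptions, not the server's confirmed schema:

```python
import json

def build_tool_call(tool_name, arguments, request_id=1):
    """Build an MCP tools/call request envelope (JSON-RPC 2.0)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical arguments: the exact parameter names for run_evaluation
# are assumptions for illustration only.
payload = build_tool_call(
    "run_evaluation",
    {
        "evaluator_id": "<evaluator-id>",     # obtained from list_evaluators
        "request": "What is MCP?",            # the prompt under evaluation
        "response": "MCP is a protocol ...",  # the model output to score
    },
)
print(json.dumps(payload, indent=2))
```

Any MCP client library builds an envelope of this shape for you; the sketch only makes explicit what travels over the /sse or /mcp endpoint.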
How to install
Prerequisites:
- Docker installed and running on your host
- A Scorable API key (sign up at scorable.ai/settings/api-keys or use a temporary key)
- Internet access to pull the container image from GitHub Container Registry (ghcr.io)
Installation steps (Docker):
- Obtain your Scorable API key and replace <your_key> in the command below, or keep it as an environment variable to inject at runtime.
- Run the MCP server: docker run -e SCORABLE_API_KEY=<your_key> -p 0.0.0.0:9090:9090 --name=rs-mcp -d ghcr.io/scorable/scorable-mcp:latest
- Verify the server is up by fetching logs: docker logs rs-mcp. You should see lines indicating that the SSE or MCP endpoints are listening, for example: SSE server listening on http://0.0.0.0:9090/sse
Alternative (stdio/stdin transport via uvx):
- Install uv (which provides the uvx command) if you don't have it, and ensure Git is available.
- Run the MCP server from your project using stdio transport with a client configuration such as:
  {
    "mcpServers": {
      "scorable": {
        "command": "uvx",
        "args": ["--from", "git+https://github.com/scorable/scorable-mcp.git", "stdio"],
        "env": { "SCORABLE_API_KEY": "<your_api_key>" }
      }
    }
  }
- Ensure the API key is provided via the SCORABLE_API_KEY environment variable.
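Over the stdio transport, client and server exchange newline-delimited JSON-RPC messages on stdin/stdout, starting with an initialize handshake. A minimal sketch of framing that first message; the protocolVersion and clientInfo values are placeholder assumptions:

```python
import json

def encode_message(msg):
    """Serialize one MCP message for the stdio transport: one JSON object per line."""
    return json.dumps(msg, separators=(",", ":")) + "\n"

# Assumed handshake values for illustration; a real client library
# fills these in according to the MCP specification it implements.
initialize = {
    "jsonrpc": "2.0",
    "id": 0,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1"},
    },
}

frame = encode_message(initialize)  # written to the server's stdin
```

In practice the uvx-launched process reads frames like this from stdin and writes its responses to stdout, so no network port is involved.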
Prerequisites recap:
- Docker (recommended) or uvx-powered stdio transport as alternative
- A valid SCORABLE_API_KEY
- Network access to pull the scorable-mcp image or access to the repository for stdio transport
Additional notes
Notes and tips:
- The Docker setup exposes the MCP server on port 9090. Binding to 0.0.0.0 makes it reachable on all network interfaces; bind to 127.0.0.1 instead if the server should only be reachable locally, and adjust firewall rules as needed.
- The server supports both /sse and /mcp endpoints; /mcp is the newer preferred endpoint, while /sse remains for backward compatibility.
- If you're using Cursor or another MCP client, you can add the server like:
  {
    "mcpServers": {
      "scorable": { "url": "http://localhost:9090/sse" }
    }
  }
- When using stdio transport, you’ll fetch evaluators and run evaluations through the Python/JS client examples provided in the README.
- If you encounter API key issues, ensure the key has the necessary scopes and that it is passed to the container/environment correctly.
- The available tools include: list_evaluators, run_evaluation, run_evaluation_by_name, run_coding_policy_adherence, list_judges, run_judge. You can chain the outputs to select evaluators by ID or name and pass optional contexts as needed.
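Chaining these tools typically means picking an evaluator from a list_evaluators result and feeding its ID into run_evaluation. A small sketch under a mocked response; the field names ("id", "name") are assumptions about the result shape, not the documented Scorable schema:

```python
# Mocked list_evaluators output for illustration only; real entries
# come from the server and may carry different fields.
mock_evaluators = [
    {"id": "ev-123", "name": "Clarity"},
    {"id": "ev-456", "name": "Policy Adherence"},
]

def select_evaluator(evaluators, name):
    """Return the first evaluator whose name matches, or None."""
    return next((e for e in evaluators if e["name"] == name), None)

chosen = select_evaluator(mock_evaluators, "Clarity")

# Arguments for a follow-up run_evaluation call (placeholder request/response).
args = {"evaluator_id": chosen["id"], "request": "...", "response": "..."}
```

Selecting by name on the client side like this mirrors what run_evaluation_by_name does server-side in a single call.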
Related MCP Servers
mcp-checkpoint
MCP Checkpoint continuously secures and monitors Model Context Protocol operations through static and dynamic scans, revealing hidden risks in agent-to-tool communications.
mcp-playground
A Streamlit-based chat app for LLMs with plug-and-play tool support via Model Context Protocol (MCP), powered by LangChain, LangGraph, and Docker.
registry
The BioContextAI Registry for biomedical MCP servers
knowledgebase
BioContextAI Knowledgebase MCP server for biomedical agentic AI
proxy-base-agent
A stateful agent with 100% reliable tool use. Build custom agents on any LLM with guaranteed state consistency.
llm-bridge
A model-agnostic Model Context Protocol (MCP) server that enables seamless integration with various Large Language Models (LLMs) like GPT, DeepSeek, Claude, and more.