mcp-client-benchmark
An automated benchmark and public leaderboard for Model Context Protocol (MCP) clients. Test your client and see how it ranks!
claude mcp add --transport stdio portal-labs-infrastructure-mcp-client-benchmark npx -y mcp-client-benchmark \
  --env MCP_ENDPOINT="https://mcp-client-benchmark.remote-mcp-servers.com/mcp" \
  --env MCP_LEADERBOARD="https://remote-mcp-servers.com/clients"
How to use
The MCP Client Benchmark server provides a standardized, automated evaluation environment for MCP clients. It runs a predefined series of capability checks (elicitation handling, dynamic message generation, resource reading, and data verification) against participating clients and scores them on a transparent rubric.

Clients connect to the public leaderboard endpoint to discover the benchmark server and start a run. After a run completes, the results are surfaced on the leaderboard with a full scorecard, enabling comparison across implementations.

To participate, configure your MCP client to connect to the given remote server URL, then initiate the evaluation. The server will guide your client through the checks and record the outcomes for public viewing on the leaderboard.
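For clients that take a JSON server configuration instead of the `claude mcp add` command shown above, the same stdio entry can be expressed roughly as follows. The top-level key ("mcpServers") and overall shape vary by client, so treat this as a sketch rather than a canonical config; the command, arguments, and environment variables mirror the CLI invocation above:

```json
{
  "mcpServers": {
    "mcp-client-benchmark": {
      "command": "npx",
      "args": ["-y", "mcp-client-benchmark"],
      "env": {
        "MCP_ENDPOINT": "https://mcp-client-benchmark.remote-mcp-servers.com/mcp",
        "MCP_LEADERBOARD": "https://remote-mcp-servers.com/clients"
      }
    }
  }
}
```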
How to install
Prerequisites:
- Node.js (recommended LTS) and npm installed
- Internet access to fetch the MCP benchmark package
Install and run the benchmark server:
- Ensure Node.js and npm are installed. Verify with:
  node -v
  npm -v
- Run the benchmark server via npx (this pulls the latest package and runs it):
  npx -y mcp-client-benchmark
- If you prefer to install globally or locally for development, install the package and run a local instance (example):
  npm install -g mcp-client-benchmark
  mcp-client-benchmark
- Ensure the server can reach the public leaderboard and the MCP endpoint:
- Endpoint to connect clients: https://mcp-client-benchmark.remote-mcp-servers.com/mcp
- Leaderboard: https://remote-mcp-servers.com/clients
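The steps above can be preceded by a small preflight check. This is a minimal sketch assuming a POSIX shell; it only reports whether the required tooling is on PATH and does not install anything:

```shell
# Preflight sketch: report which required tools are available.
status=""
for cmd in node npm npx; do
  if command -v "$cmd" >/dev/null 2>&1; then
    status="$status ok:$cmd"
  else
    status="$status missing:$cmd"
  fi
done
echo "$status"
```

If anything is reported missing, install Node.js (which bundles npm and npx) before running the benchmark server.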
Note: The exact commands may vary slightly depending on package publishing; consult the package documentation if npx prompts for additional configuration.
Additional notes
Tips and caveats:
- The benchmark is public-facing; monitor run activity on the leaderboard to ensure your client integrations are functioning as expected.
- If you encounter network or DNS issues, verify outbound traffic to the MCP endpoint and to the leaderboard domain.
- Environment variables shown in the config can be adjusted to point to alternative endpoints or to enable debugging output (e.g., enable verbose logs in the MCP client during evaluation).
- The rubric allocates points for elicitation handling, dynamic message generation, resource reading, and data verification. Ensure your client correctly implements the server-initiated elicitation flow and uses the specified MCP actions for best scoring potential.
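For reference, a server-initiated elicitation in MCP is a JSON-RPC request from the server that the client answers with an action and, on accept, content matching the requested schema (per the MCP specification's elicitation flow). The message text, schema, and values below are illustrative, not the benchmark's actual prompts:

```json
// server -> client (illustrative message and schema)
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "elicitation/create",
  "params": {
    "message": "Please provide a label for this benchmark run.",
    "requestedSchema": {
      "type": "object",
      "properties": { "label": { "type": "string" } },
      "required": ["label"]
    }
  }
}

// client -> server
{
  "jsonrpc": "2.0",
  "id": 7,
  "result": { "action": "accept", "content": { "label": "my-client-v1" } }
}
```

A client that cannot present the prompt should still respond with a "decline" or "cancel" action rather than ignoring the request.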