mcp-chat
Open Source Generic MCP Client for testing & evaluating MCP servers and agents
```shell
claude mcp add --transport stdio flux159-mcp-chat npx -y mcp-chat \
  --env ANTHROPIC_API_KEY="Your Anthropic API key"
```
How to use
mcp-chat is an Open Source Generic MCP Client designed for testing and evaluating MCP servers and agents. It acts as a flexible frontend that can connect to a variety of MCP servers (JS, Python, Docker) and chat with models via a simulated or real backend. You can launch it to chat interactively with MCP servers, pass prompts directly, or run in web mode to manage chats through a UI. The tool also supports exporting and saving chats, configuring system prompts, and inspecting the tool calls and outputs shown in the chat, which helps you debug server behavior.

To use it, install it locally, then run the CLI with a server spec (for example, a server you want to evaluate) and optionally a model to chat with. Alternatively, run it in web mode to create and manage chats through a browser interface; chats are stored under your home directory for persistence.
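As a sketch, a typical CLI session might look like the following. It uses the `--server`, `-p`/`--prompt`, and `-m`/`--model` flags described in this README; the model identifier is illustrative and should be replaced with one your backend supports:

```shell
# Chat interactively with a server under test
npx mcp-chat --server "npx mcp-server-kubernetes"

# Pass a prompt directly and choose a model (model name is illustrative)
npx mcp-chat --server "npx mcp-server-kubernetes" \
  -m "claude-3-5-sonnet-latest" \
  -p "List the pods in the default namespace"
```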
How to install
Prerequisites:
- Node.js or Bun (Bun is recommended for a faster dev experience)
- Git
- Optional: Python or Docker if you plan to run servers in those environments
Install from source:
- Clone the repository:
  ```shell
  git clone https://github.com/Flux159/mcp-chat.git
  cd mcp-chat
  ```
- Install dependencies:
  - With Bun (recommended): `bun install`
  - With npm: `npm install`
- Build (only if the project defines a build step):
  - Run `bun run build` if it is defined in package.json; otherwise skip this step.
- Run the CLI with a server configuration (example):
  ```shell
  npx mcp-chat --server "npx mcp-server-kubernetes" -p "List the pods in the default namespace"
  ```
Prerequisites note:
- Ensure ANTHROPIC_API_KEY is available in your environment or a .env file if you intend to test with Anthropic models.
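To make the key available, you can either export it in your shell or place it in a .env file; the value below is a placeholder:

```shell
# Option 1: export the key for the current shell session (placeholder value)
export ANTHROPIC_API_KEY="your-key-here"

# Option 2: persist it in a .env file in the project directory
echo 'ANTHROPIC_API_KEY=your-key-here' >> .env
```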
Additional notes
Tips and common considerations:
- The tool supports passing environment variables to MCP servers. Use env mappings in your mcp_config to forward values like KUBECONFIG or other credentials.
- If running in web mode, ensure your API keys and network access are configured for the browser environment.
- For Anthropic models, export ANTHROPIC_API_KEY in your shell or .env file before running commands that use Claude family models.
- When testing local servers, you can point the client at a local server script (a Node script, a uv-managed Python server, or a Docker container). See the README for examples of running Node scripts or uv/python for Python servers.
- Use the CLI’s -p/--prompt flag to pass in prompts directly, and -m/--model to choose models when supported by the server.
- Inspect the tool calls and outputs shown in the chat to understand how your MCP servers interpret prompts and respond.
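As a sketch of the env-forwarding tip above: MCP client configs typically map environment variables per server. The exact schema mcp-chat accepts may differ; the shape below follows the common `command`/`args`/`env` convention used by MCP clients, and the server name, path, and variable values are placeholders:

```json
{
  "mcpServers": {
    "kubernetes": {
      "command": "npx",
      "args": ["mcp-server-kubernetes"],
      "env": {
        "KUBECONFIG": "/home/user/.kube/config"
      }
    }
  }
}
```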
Related MCP Servers
- mcp-graphql: Model Context Protocol server for GraphQL
- aws: Talk with your AWS using Claude. Model Context Protocol (MCP) server for AWS. Better Amazon Q alternative.
- recall: Persistent cross-session memory for Claude & AI agents. Self-host on Redis/Valkey, or use the managed SaaS at recallmcp.com.
- rohlik: MCP server that lets you shop groceries across the Rohlik Group platforms (Rohlik.cz, Knuspr.de, Gurkerl.at, Kifli.hu, Sezamo.ro)
- nestjs: NestJS module for seamless Model Context Protocol (MCP) server integration using decorators.
- codemesh: The Self-Improving MCP Server. Agents write code to orchestrate multiple MCP servers with intelligent TypeScript execution and auto-augmentation.