mcp_chatbot
A chatbot implementation compatible with MCP (terminal and Streamlit interfaces supported)
To register the bundled Markdown processor with the Claude Code CLI:

```bash
claude mcp add --transport stdio keli-wen-mcp_chatbot uv --directory /path/to/your/project/mcp_servers run markdown_processor.py
```
How to use
The MCPChatbot example demonstrates how to integrate an MCP server pipeline with a customizable LLM (for example, Qwen) to build a chatbot that can call external tools through MCP servers. The project includes a built-in MCP server for Markdown processing and provides multiple interfaces: a simple CLI chatbot, two interactive terminal chat modes (regular and streaming), and a Streamlit web chatbot that visualizes MCP tool workflows. You can run single-prompt scenarios or hold multi-turn conversations in which the LLM orchestrates calls to tools such as the Markdown processor, all coordinated via MCP messages and recorded in a structured workflow trace.
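To make the "structured workflow trace" idea concrete, here is a minimal sketch of how tool calls across a multi-turn chat could be recorded. The class and field names (`ToolCall`, `WorkflowTrace`, the tool names) are illustrative assumptions, not the project's actual types:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class ToolCall:
    """One MCP tool invocation made on the LLM's behalf (illustrative)."""
    server: str                     # e.g. "markdown_processor"
    tool: str                       # tool name exposed by the MCP server
    arguments: dict[str, Any]       # arguments the LLM supplied
    result: Any = None              # result returned by the server

@dataclass
class WorkflowTrace:
    """Ordered record of tool calls across a multi-turn conversation."""
    calls: list[ToolCall] = field(default_factory=list)

    def record(self, call: ToolCall) -> None:
        self.calls.append(call)

    def summary(self) -> list[str]:
        # Human-readable lines, one per tool call, in call order.
        return [f"{c.server}.{c.tool}({c.arguments})" for c in self.calls]
```

A UI such as the Streamlit chatbot can then render `summary()` alongside the conversation to show which MCP tools were invoked and in what order.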
How to install
Prerequisites:
- Python 3.10+
- Git
- (Optional) uv (a fast Python package and environment manager)
Installation steps:
- Clone the repository and navigate into it:
```bash
git clone git@github.com:keli-wen/mcp_chatbot.git
cd mcp_chatbot
```
- Set up a virtual environment and install dependencies:
```bash
# Create and activate a venv (example for macOS/Linux)
uv venv .venv --python=3.10
source .venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# If you prefer uv for faster installs
uv pip install -r requirements.txt
```
- Configure environment variables (example):
```bash
cp .env.example .env
# Edit .env to set your LLM API keys and paths
```
- Ensure the MCP server configuration is set to use the local uv path and the correct project paths in:
- mcp_servers/servers_config.json

Example entry:
```json
{
  "mcpServers": {
    "markdown_processor": {
      "command": "/path/to/your/uv",
      "args": [
        "--directory",
        "/path/to/your/project/mcp_servers",
        "run",
        "markdown_processor.py"
      ]
    }
  }
}
```
- (Optional) Run quick checks:
```bash
bash scripts/check.sh
```
- Run unit tests (optional):
```bash
bash scripts/unittest.sh
```
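Since misconfigured paths in servers_config.json are the most common setup problem, they can be sanity-checked with a short script. This is a sketch under the JSON layout shown above; the `validate_servers_config` helper is hypothetical, not part of the project:

```python
import json
import os
import shutil

def validate_servers_config(path):
    """Check that each configured MCP server's command is resolvable
    and that --directory arguments point at existing absolute paths."""
    with open(path) as f:
        config = json.load(f)
    problems = []
    for name, server in config.get("mcpServers", {}).items():
        cmd = server.get("command", "")
        # The command must be an existing absolute path or be on PATH.
        if not (os.path.isabs(cmd) and os.path.isfile(cmd)) and shutil.which(cmd) is None:
            problems.append(f"{name}: command not found: {cmd}")
        args = server.get("args", [])
        # Inspect each (flag, value) pair for --directory targets.
        for flag, value in zip(args, args[1:]):
            if flag == "--directory":
                if not os.path.isabs(value):
                    problems.append(f"{name}: --directory is not absolute: {value}")
                elif not os.path.isdir(value):
                    problems.append(f"{name}: directory does not exist: {value}")
    return problems
```

Running this before starting the chatbot surfaces path mistakes immediately rather than as opaque MCP connection failures.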
Additional notes
Tips and common issues:
- Ensure paths in servers_config.json are absolute and correct for your environment.
- For Windows, refer to the Troubleshooting example in the README and adjust path formatting accordingly.
- Set the environment variables in .env to provide LLM API keys, model names, and folder paths used by the Markdown processor and result storage.
- The MCP pipeline supports multiple MCP tool calls per prompt and multi-turn chats; tune the LLM and tool configurations to match your use case.
- If you encounter API key errors, verify that LLM_BASE_URL and LLM_API_KEY (and related OLLAMA variables if using Ollama) are correctly set in .env.
- Use the provided examples (single_prompt, chatbot_terminal, chatbot_streamlit) to validate integration before extending with your own tools.
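The API-key check described in the tips above can be automated with a small helper. This is illustrative only; besides LLM_BASE_URL and LLM_API_KEY, which the tips name explicitly, any further variable names would be assumptions, so only those two are checked here:

```python
import os

# Variables the chatbot reads from .env (see the tips above).
REQUIRED_VARS = ("LLM_BASE_URL", "LLM_API_KEY")

def missing_env_vars(env=None):
    """Return the names of required variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]
```

Calling `missing_env_vars()` at startup and printing the result gives a clearer error than a failed API request later in the pipeline.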