
mcp_chatbot

A chatbot implementation compatible with MCP (terminal and Streamlit interfaces supported)

Installation
Run this command in your terminal to add the MCP server to Claude Code. The `--` separates the server command and its arguments from Claude's own flags; replace the placeholder path with the absolute path on your machine:

claude mcp add --transport stdio keli-wen-mcp_chatbot -- uv --directory /path/to/your/project/mcp_servers run markdown_processor.py

How to use

The MCPChatbot example demonstrates how to integrate an MCP server pipeline with a customizable LLM (for example, Qwen) to build a chatbot that can call external tools through MCP servers. The project includes a built-in MCP server for Markdown processing and provides multiple interfaces: a simple CLI chatbot, two interactive terminal chat modes (regular and streaming), and a Streamlit web chatbot that visualizes MCP tool workflows. You can run single-prompt scenarios or engage in multi-turn conversations where the LLM orchestrates calls to tools like the Markdown processor, all coordinated via MCP messages and a structured workflow trace.

How to install

Prerequisites:

  • Python 3.10+
  • Git
  • (Optional) uv (a fast Python package and environment manager)

Installation steps:

  1. Clone the repository and navigate into it:
    git clone git@github.com:keli-wen/mcp_chatbot.git
    cd mcp_chatbot
    
  2. Set up a virtual environment and install dependencies:
    # Create and activate a venv (example for macOS/Linux)
    uv venv .venv --python=3.10
    source .venv/bin/activate
    # Install dependencies
    pip install -r requirements.txt
    # If you prefer uv for faster installs
    uv pip install -r requirements.txt
    
  3. Configure environment variables (example):
    cp .env.example .env
    # Edit .env to set your LLM API keys and paths
    
  4. Ensure the MCP server configuration in mcp_servers/servers_config.json points to your local uv binary and the correct project paths. Example entry:
    {
      "mcpServers": {
        "markdown_processor": {
          "command": "/path/to/your/uv",
          "args": [
            "--directory",
            "/path/to/your/project/mcp_servers",
            "run",
            "markdown_processor.py"
          ]
        }
      }
    }
    
  5. (Optional) Run quick checks:
    bash scripts/check.sh
    
  6. Run unit tests (optional):
    bash scripts/unittest.sh
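As a reference for step 3, a minimal `.env` might look like the sketch below. Only `LLM_API_KEY` and `LLM_BASE_URL` are named in this page's troubleshooting tips; everything else is a placeholder, so treat `.env.example` in the repository as the authoritative list of variables:

```ini
# LLM endpoint and credentials (variable names taken from the tips below)
LLM_API_KEY=your-api-key-here
LLM_BASE_URL=https://your-llm-endpoint/v1

# If you use Ollama instead, set the related OLLAMA_* variables
# exactly as they appear in .env.example
```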
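Step 4's absolute-path requirement is easy to get wrong (see also the tips below), so a small sanity check can catch placeholder paths before you launch anything. This helper is not part of the repository; it is a sketch that flags relative `command` and `--directory` values in a `servers_config.json`-style dict:

```python
from pathlib import Path


def find_config_issues(config: dict) -> list[str]:
    """Return a list of problems found in an MCP server config dict.

    Flags any server whose `command` or `--directory` argument is not
    an absolute path, since relative paths break once the server is
    launched from a different working directory.
    """
    issues = []
    for name, server in config.get("mcpServers", {}).items():
        command = server.get("command", "")
        if not Path(command).is_absolute():
            issues.append(f"{name}: command {command!r} should be an absolute path")
        args = server.get("args", [])
        if "--directory" in args:
            directory = args[args.index("--directory") + 1]
            if not Path(directory).is_absolute():
                issues.append(f"{name}: --directory {directory!r} should be an absolute path")
    return issues


# Example: placeholder-style relative paths are flagged until replaced.
sample = {
    "mcpServers": {
        "markdown_processor": {
            "command": "uv",  # should be e.g. /path/to/your/uv
            "args": ["--directory", "mcp_servers", "run", "markdown_processor.py"],
        }
    }
}
for issue in find_config_issues(sample):
    print(issue)
```

Run it against your edited config (e.g. via `json.load`) before starting the chatbot; an empty list means the path checks passed.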
    

Additional notes

Tips and common issues:

  • Ensure paths in servers_config.json are absolute and correct for your environment.
  • For Windows, refer to the Troubleshooting example in the README and adjust path formatting accordingly.
  • Set the environment variables in .env to provide LLM API keys, model names, and folder paths used by the Markdown processor and result storage.
  • The MCP pipeline supports multiple MCP tool calls per prompt and multi-turn chats; tune the LLM and tool configurations to match your use case.
  • If you encounter API key errors, verify that LLM_BASE_URL and LLM_API_KEY (and related OLLAMA variables if using Ollama) are correctly set in .env.
  • Use the provided examples (single_prompt, chatbot_terminal, chatbot_streamlit) to validate integration before extending with your own tools.
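The multi-tool, multi-turn behavior described above can be sketched as a simple loop: on each turn the LLM either requests a tool call or returns a final answer, and tool results are appended to the conversation until it is done. This is an illustrative toy, not the project's actual code; the function names, message shapes, and the `fake_llm`/`read_markdown` stand-ins are all assumptions:

```python
def run_turn(llm, tools: dict, user_message: str) -> str:
    """One conversational turn: the LLM may request several tool calls
    before producing its final answer."""
    messages = [{"role": "user", "content": user_message}]
    while True:
        reply = llm(messages)
        if reply.get("tool") is None:
            return reply["content"]  # final answer for this turn
        # The LLM asked for a tool: invoke it and feed the result back.
        result = tools[reply["tool"]](**reply.get("args", {}))
        messages.append({"role": "tool", "name": reply["tool"], "content": result})


# Toy stand-ins so the sketch runs end to end.
def fake_llm(messages):
    # First call: request the (hypothetical) markdown tool; then answer.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "read_markdown", "args": {"path": "notes.md"}}
    return {"tool": None, "content": "Summarized: " + messages[-1]["content"]}


tools = {"read_markdown": lambda path: f"# contents of {path}"}
print(run_turn(fake_llm, tools, "Summarize notes.md"))
# → Summarized: # contents of notes.md
```

In the real pipeline the tool calls go over MCP stdio messages rather than a plain dict lookup, but the control flow, where tool results loop back into the conversation until the model stops requesting tools, is the part this sketch illustrates.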
