AI-Customer-Support-Bot--MCP-Server
MCP server from ChiragPatankar/AI-Customer-Support-Bot--MCP-Server
claude mcp add --transport stdio chiragpatankar-ai-customer-support-bot--mcp-server python app.py \
  --env SECRET_KEY="your-super-secret-key" \
  --env DATABASE_URL="postgresql://user:password@localhost/customer_support_bot" \
  --env AI_SERVICE_API_KEY="your-ai-service-api-key" \
  --env AI_SERVICE_MODEL="gpt-4" \
  --env RATE_LIMIT_REQUESTS="100" \
  --env RATE_LIMIT_PERIOD="60"
How to use
This MCP server implements a Python FastAPI-based AI customer support assistant. It exposes MCP-compliant endpoints to process single queries and batch requests, enabling AI-driven responses for customer inquiries. Typical usage includes querying the server health, sending user questions, and handling batched queries for efficiency. The API supports an authentication token via the MCP header and returns structured responses with the generated AI reply, confidence, and timing metrics. You can integrate it into customer support workflows, chat widgets, or back-office tooling by calling /mcp/health, /mcp/process, and /mcp/batch as defined in the API Reference. The server is designed to be AI-provider agnostic, so you can swap in OpenAI, Anthropic, or other providers through the service layer with minimal changes.
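As a sketch of what a single-query call to /mcp/process might look like from a client: the auth header name ("x-mcp-auth") and the request/response field names below are assumptions for illustration, not confirmed by the repository docs.

```python
# Hypothetical client helper for the /mcp/process endpoint described above.
# Header and field names are assumptions; adjust them to the actual API.

def build_process_request(query: str, auth_token: str) -> dict:
    """Assemble headers and JSON body for a single /mcp/process call."""
    return {
        "headers": {
            "x-mcp-auth": auth_token,       # assumed MCP auth header name
            "Content-Type": "application/json",
        },
        "json": {"query": query},           # assumed request field
    }

# Sending it with the requests library once the server is running:
# import requests
# req = build_process_request("How do I reset my password?", "your-token")
# resp = requests.post("http://localhost:8000/mcp/process",
#                      headers=req["headers"], json=req["json"])
# resp.json()  # structured reply: AI response text, confidence, timing
```

The same payload shape would be wrapped in a list for /mcp/batch, which is where the batching efficiency mentioned above comes from.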
How to install
Prerequisites:
- Python 3.8+ (often via a virtual environment)
- PostgreSQL database
- Git
Steps:
- Clone the repository:
  git clone https://github.com/ChiragPatankar/AI-Customer-Support-Bot--MCP-Server.git
  cd AI-Customer-Support-Bot--MCP-Server
- Create and activate a virtual environment:
  python -m venv venv
  On macOS/Linux: source venv/bin/activate
  On Windows: venv\Scripts\activate
- Install dependencies:
  pip install -r requirements.txt
- Configure the environment:
  cp .env.example .env
  Edit .env with your configuration (DATABASE_URL, SECRET_KEY, rate limits, AI service keys, etc.)
- Set up the database:
  createdb customer_support_bot
  Alternatively, run migrations with your preferred ORM/migration tool, depending on your setup.
- Run the server:
  python app.py
  The server should be available at http://localhost:8000
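Once the server is up, a quick way to confirm the install is to poll the /mcp/health endpoint. A minimal sketch, assuming the endpoint returns a JSON payload with a "status" field (that shape is an assumption, not confirmed by the repository docs):

```python
# Startup check against /mcp/health. The response shape ({"status": "ok"})
# is an assumption; adjust is_healthy() to match the actual payload.
import json
from urllib import error, request

def is_healthy(payload: dict) -> bool:
    """Interpret an assumed health payload: healthy iff status is ok/healthy."""
    return str(payload.get("status", "")).lower() in {"ok", "healthy"}

def check_server(url: str = "http://localhost:8000/mcp/health") -> bool:
    """Return True if the server answers and reports healthy, else False."""
    try:
        with request.urlopen(url, timeout=5) as resp:
            return is_healthy(json.loads(resp.read().decode("utf-8")))
    except (error.URLError, ValueError, OSError):
        return False
```

Calling check_server() returns False rather than raising when the server is down, which makes it usable in a retry loop while the process starts.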
Additional notes
Environment variables:
- DATABASE_URL: Postgres connection string
- SECRET_KEY: Secret for token generation and security
- RATE_LIMIT_REQUESTS and RATE_LIMIT_PERIOD: Configure MCP rate limiting
- AI_SERVICE_API_KEY and AI_SERVICE_MODEL: Credentials and model name for the chosen AI provider

Common issues:
- Ensure PostgreSQL is running and accessible
- Validate that the .env file is loaded by the application
- If using a containerized setup later, consider exposing the proper ports and mounting environment variables

Configuration options:
- You can adjust rate limits, AI provider, and security settings via environment variables or config files as supported by the startup script.
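Tying the variables above together, a minimal .env might look like the following; every value is a placeholder to replace with your own:

```
DATABASE_URL=postgresql://user:password@localhost/customer_support_bot
SECRET_KEY=your-super-secret-key
RATE_LIMIT_REQUESTS=100
RATE_LIMIT_PERIOD=60
AI_SERVICE_API_KEY=your-ai-service-api-key
AI_SERVICE_MODEL=gpt-4
```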
Related MCP Servers
mcp-vegalite
MCP server from isaacwasserman/mcp-vegalite-server
github-chat
A Model Context Protocol (MCP) for analyzing and querying GitHub repositories using the GitHub Chat API.
nautex
MCP server for guiding Coding Agents via end-to-end requirements to implementation plan pipeline
pagerduty
PagerDuty's official local MCP (Model Context Protocol) server which provides tools to interact with your PagerDuty account directly from your MCP-enabled client.
futu-stock
MCP server for Futu NiuNiu stock
mcp-boilerplate
Boilerplate using one of the 'better' ways to build MCP Servers. Written using FastMCP