AgentChat
AgentChat is an LLM-powered agent collaboration platform that ships with built-in default agents and supports user-defined agents. Through multi-turn dialogue and task collaboration, agents can understand and help complete complex tasks. The project integrates LangChain, function calling, the MCP protocol, RAG, memory, Milvus, and ElasticSearch for efficient knowledge retrieval and tool invocation, and uses FastAPI to build a high-performance backend service.
claude mcp add --transport stdio shy2593666979-agentchat python -m agentchat \
  --env APP_ENV="development or production" \
  --env REDIS_URL="redis://localhost:6379/0" \
  --env DATABASE_URL="postgresql://user:password@host:port/dbname" \
  --env OPENAI_API_KEY="your-openai-api-key (if using OpenAI models)"
How to use
AgentChat is a modern intelligent dialogue system that orchestrates multiple AI models, tools, and knowledge sources. It integrates MCP servers to enable Model Context Protocol-based services, allowing you to upload and manage custom MCP services alongside built-in capabilities such as knowledge retrieval, tool invocation, and multi-agent workflows. Once the server is running, you can use the MCP-enabled tools to perform tasks like querying knowledge bases, calling external tools (e.g., weather, document parsing, search), and coordinating multiple agents to solve complex tasks. The frontend provides an intuitive dashboard for managing agents, tools, and workflows, while the backend handles streaming responses and asynchronous task execution for real-time interactions.
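The backend streams responses for real-time interaction, so a client typically consumes the reply chunk by chunk. Below is a minimal sketch of parsing server-sent-event style lines into a full answer; the exact wire format (`data: {...}` lines with a `delta` field and a `[DONE]` sentinel) is an assumption modeled on common streaming APIs, not taken from AgentChat's documentation:

```python
import json

def parse_sse_line(line: str):
    """Parse one 'data: {...}' line from an SSE stream.

    Returns the decoded JSON payload, or None for blank lines,
    comments, and the '[DONE]' sentinel. The wire format here is
    an assumption; adjust it to what the AgentChat backend emits.
    """
    line = line.strip()
    if not line or not line.startswith("data:"):
        return None
    data = line[len("data:"):].strip()
    if data == "[DONE]":
        return None
    return json.loads(data)

# Example: reassemble a streamed answer from raw SSE lines.
raw_lines = [
    'data: {"delta": "Hello"}',
    'data: {"delta": ", world"}',
    "data: [DONE]",
]
chunks = [p["delta"] for p in map(parse_sse_line, raw_lines) if p]
print("".join(chunks))  # Hello, world
```

The same parser works whether the lines come from an HTTP streaming response or a local log replay, which makes it easy to unit-test without a running server.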
How to install
Prerequisites:
- Python 3.12+ installed on your system
- Git installed
- Optional: Docker if you prefer containerized deployment
- Clone the repository:
  - git clone https://github.com/shy2593666979/agentchat.git
  - cd agentchat
- Create a virtual environment (recommended):
  - python -m venv venv
  - source venv/bin/activate   # on macOS/Linux
  - .\venv\Scripts\activate    # on Windows
- Install dependencies:
  - pip install -U pip setuptools wheel
  - pip install -r requirements.txt
- Configure environment variables:
  - Copy .env.example to .env
  - Edit .env with appropriate values (DATABASE_URL, REDIS_URL, OPENAI_API_KEY, etc.)
- Run the server:
  - python -m agentchat
- Optional: run with Docker:
  - Ensure Docker is running
  - Build: docker build -t agentchat:latest .
  - Run: docker run -i --env-file .env -p 8000:8000 agentchat:latest
- Verify:
  - Open http://localhost:8000 (or the configured port) and consult the documentation for endpoints and MCP configuration.
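Before starting the server, it can help to sanity-check that the required settings are present. A small sketch of such a preflight check, using the variable names from the .env step above (which variables are strictly required versus optional is an assumption):

```python
import os

# Assumed split; adjust to the project's actual .env.example.
REQUIRED = ["DATABASE_URL", "REDIS_URL"]
OPTIONAL = ["OPENAI_API_KEY", "APP_ENV"]

def check_env(env: dict) -> list[str]:
    """Return the names of required variables that are missing or empty."""
    return [name for name in REQUIRED if not env.get(name)]

missing = check_env(dict(os.environ))
if missing:
    print("Missing required settings:", ", ".join(missing))
else:
    print("Environment looks complete; run: python -m agentchat")
```

Running this once after editing .env catches the most common startup failure (an unset connection string) before the server itself reports it.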
Additional notes
Tips and common issues:
- Ensure your OPENAI_API_KEY (or other model keys) are set if you rely on external AI services.
- When using MCP, make sure the MCP endpoints you upload are compatible with the server's expected protocol version. Check migration notes if upgrading from older versions.
- For multi-agent workflows, define clear goals and constraints to avoid miscoordination.
- If you encounter startup errors related to missing dependencies, re-create the virtual environment and reinstall requirements.
- Use the environment variable APP_ENV to toggle between development and production modes; enable logging and monitoring in production for better observability.
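One way to act on the APP_ENV toggle mentioned above is to derive the logging level from it at startup: verbose output in development, a quieter but still observable level in production. This is a sketch of the pattern, not the project's actual startup code:

```python
import logging
import os

def configure_logging(app_env: str) -> int:
    """Pick a log level from APP_ENV: DEBUG in development,
    INFO otherwise, so production stays observable but not noisy."""
    level = logging.DEBUG if app_env == "development" else logging.INFO
    logging.basicConfig(
        level=level,
        format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    )
    return level

# Default to development when APP_ENV is unset, as in local runs.
level = configure_logging(os.environ.get("APP_ENV", "development"))
print("log level:", logging.getLevelName(level))
```

In production you would typically pair this with structured log shipping and metrics, but the env-driven level switch alone already separates debugging noise from operational signal.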