ChattyPlay-Agent
This project is built with React + TypeScript + Hono. It implements Google and GitHub OAuth login and integrates the OpenAI SDK, MCP services, and agent-oriented large language models. It adds real-time gold prices with K-line (candlestick) charts, Hugging Face paper retrieval, and a text-to-image service (no extra proxy or API key required). It also supports online parsing of member-only videos from Tencent Video, iQiyi, Youku, Mango TV, Bilibili, NetEase Cloud Music, and other platforms, as well as anime/manga reading and paper paraphrasing (adapted for both desktop and mobile).
claude mcp add --transport stdio p1kaj1uu-chattyplay-agent -- docker run -i \
  --env DEBUG="false" \
  --env REDIS_URL="redis://localhost:6379/0" \
  --env ENVIRONMENT="production" \
  --env DATABASE_URL="postgres://user:pass@host:5432/dbname" \
  --env OPENAI_API_KEY="your-api-key-if-needed" \
  p1kaj1uu/chattyplay-agent:latest
How to use
ChattyPlay-Agent is a multifaceted backend service suite built with FastAPI that aggregates and exposes a collection of tools for media parsing, real-time financial data, and AI-assisted interactions. It includes modules for music playback and artist/track search, video parsing interfaces with multiple backends, real-time gold price feeds and TradingView-style charts, academic paper access via Hugging Face, and an integrated ChatGPT experience with streaming, context-aware conversations, and voice capabilities.
The server is designed to be deployed in containers and interfaced through a REST API; it can be extended with additional data sources and AI models as needed. To start using it, deploy the containerized server, ensure environment variables such as API keys and data stores are configured, and then query the exposed endpoints to access the respective services (music, video parsing, gold data, papers, and ChatGPT functions).
Key capabilities include:
- Music playback with fuzzy search for songs/artists and playlist controls.
- Video parsing with multiple available interfaces for fast, member-free extraction.
- Real-time gold price feeds and K-line (candlestick) charts with date range controls.
- Hugging Face scholarly paper access, arXiv linking, and related metadata.
- ChatGPT integration with streaming outputs, multi-turn conversations, and session history storage, plus voice chat and speech read-aloud features.
- Text-to-image generation integration and manga/anime content tooling.
To interact with the server, use the REST API endpoints for each feature. For example, use the music endpoints to search and play tracks, use the video parsing endpoints to obtain parsable video data, and use the ChatGPT endpoints to engage in streaming conversations with optional context persistence.
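As a sketch of how a client might address these endpoints, the helpers below build request URLs with properly encoded query parameters. The base URL and route paths here are assumptions for illustration, not documented API; adjust them to match your deployed instance.

```python
from urllib.parse import urlencode

# Hypothetical base URL and route paths -- the real routes are defined
# by the deployed server; adjust to match your instance.
BASE_URL = "http://localhost:8000"

def music_search_url(keyword: str) -> str:
    """Build a music search request URL with a URL-encoded keyword."""
    return f"{BASE_URL}/api/music/search?{urlencode({'keyword': keyword})}"

def video_parse_url(page_url: str) -> str:
    """Build a video parsing request URL for a given platform page URL."""
    return f"{BASE_URL}/api/video/parse?{urlencode({'url': page_url})}"

print(music_search_url("moonlight sonata"))
```

URL-encoding via `urlencode` matters for the video endpoint in particular, since the platform page URL passed as a parameter contains characters (`:`, `/`, `?`) that must be escaped.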
How to install
Prerequisites:
- Docker and Docker Compose installed on your host
- Optional: Python 3.10+ if you prefer a non-Docker deployment
- Access to required API keys (OpenAI, etc.) and data stores (PostgreSQL/Redis) as needed
Option A: Deploy with Docker (recommended)
- Pull the latest image (or build locally if you have a Dockerfile):
  docker pull p1kaj1uu/chattyplay-agent:latest
- Run the container with the necessary environment variables:
  docker run -d \
    --name chattyplay-agent \
    -e ENVIRONMENT=production \
    -e DEBUG=false \
    -e DATABASE_URL=postgres://user:pass@host:5432/dbname \
    -e OPENAI_API_KEY=your-api-key \
    -e REDIS_URL=redis://localhost:6379/0 \
    p1kaj1uu/chattyplay-agent:latest
- If you use docker-compose, you can define a docker-compose.yml like:
  version: '3.8'
  services:
    chattyplay-agent:
      image: p1kaj1uu/chattyplay-agent:latest
      environment:
        - ENVIRONMENT=production
        - DEBUG=false
        - DATABASE_URL=postgres://user:pass@host:5432/dbname
        - OPENAI_API_KEY=your-api-key
        - REDIS_URL=redis://redis:6379/0
      ports:
        - "8000:80" # adjust as needed
- Bring the stack up: docker-compose up -d
Option B: Local Python (uvicorn) deployment (if you prefer to run the server directly)
- Create a virtual environment and install dependencies (adjust the package list if needed):
  python3.10 -m venv venv
  source venv/bin/activate
  pip install -r requirements.txt
- Run the Uvicorn server (adjust the module path as needed):
  uvicorn main:app --host 0.0.0.0 --port 8000
- Configure environment variables as needed (DATABASE_URL, OPENAI_API_KEY, etc.).
Notes:
- Replace placeholder values with real credentials and URLs.
- If using a reverse proxy (nginx), configure it to route to the container or uvicorn port accordingly.
Additional notes
Tips and common issues:
- Ensure OpenAI API keys and any other external service keys are provided via environment variables before starting the server.
- If you see connection errors to Redis or the database, verify network access and that the services are up and listening on the configured URLs.
- For streaming ChatGPT responses, confirm that the server has sufficient CPU/memory and that the client supports chunked transfer or SSE as implemented by the API.
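If the streaming endpoint emits standard Server-Sent Events, the client's job is to split the stream on blank lines and strip the `data:` prefix from each field. A generic parser sketch (not tied to this server's exact payload format, which is an assumption here):

```python
def parse_sse(stream: str) -> list[str]:
    """Extract the data payloads from a raw SSE stream.

    Events are separated by blank lines; each `data:` line carries one
    chunk of payload. Multi-line data fields are joined with newlines.
    """
    events, data_lines = [], []
    for line in stream.splitlines():
        if line.startswith("data:"):
            data_lines.append(line[5:].lstrip())
        elif line == "" and data_lines:
            events.append("\n".join(data_lines))
            data_lines = []
    if data_lines:  # flush a trailing event with no final blank line
        events.append("\n".join(data_lines))
    return events

raw = "data: Hello\n\ndata: world\n\ndata: [DONE]\n\n"
print(parse_sse(raw))  # ['Hello', 'world', '[DONE]']
```

In a real client the same logic runs incrementally over network chunks rather than over one complete string.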
- When updating the deployment, rebuild images and restart containers to pick up changes in code or configuration.
- If using the docker image, ensure you are pulling from a trusted registry and that image tags (latest, specific version) match your stability requirements.
- Review rate limits and per-endpoint quotas to prevent abuse; adjust config to match your usage pattern.
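The rate-limiting mechanism is deployment-specific; one common approach the notes above could be implemented with is a per-client token bucket. A minimal sketch (the rate and burst numbers are placeholders, and the injectable clock is only there to make the logic testable):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: refills at `rate` tokens/sec, bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float, now=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity      # start full: allow an initial burst
        self.now = now              # injectable clock for testing
        self.last = now()

    def allow(self) -> bool:
        """Consume one token if available; return False when over quota."""
        t = self.now()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A gateway would keep one bucket per API key or client IP and return HTTP 429 when `allow()` is False.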