SyberAgent
A next-generation multi-agent framework for local PC automation
```bash
claude mcp add --transport stdio cyberzhang1-syberagent python main.py \
  --env MODEL="deepseek-chat" \
  --env API_KEY="your_api_key_here" \
  --env BASE_URL="https://api.deepseek.com/v1" \
  --env LOG_LEVEL="INFO" \
  --env MAX_FILE_SIZE="100MB" \
  --env TAVILY_API_KEY="your_tavily_api_key_here"
```
How to use
SyberAgent 2.0 is a versatile multi-agent AI system that coordinates specialized agents for document processing, image analysis, audio transcription, web search, data analysis, and natural language interaction. It uses LangGraph for orchestration and FastAPI with WebSocket support for real-time communication between clients and agents. After starting the server, you can interact with it through a chat-style interface to request complex tasks such as analyzing a PDF, extracting tables, performing OCR on images, or transcribing audio. The system distributes tasks across dedicated agents (e.g., FileProcessingAgent, ImageProcessingAgent, AudioProcessingAgent, WebSearchAgent, DataAnalysisAgent, ConversationalAgent) to optimize performance and keep the design modular.
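The division of labor among agents can be pictured as a dispatch table. This is a minimal sketch only: the agent class names follow the list above, but the keyword-based router, `handle` method, and fallback logic are illustrative assumptions, not the project's actual LangGraph orchestration.

```python
# Sketch of task routing across specialized agents (hypothetical).
# The real system routes via LangGraph; this keyword dispatch only
# illustrates how tasks fan out to dedicated agents.

class Agent:
    def handle(self, task: str) -> str:
        return f"{type(self).__name__} handled: {task}"

class FileProcessingAgent(Agent): pass
class ImageProcessingAgent(Agent): pass
class AudioProcessingAgent(Agent): pass
class ConversationalAgent(Agent): pass

# Hypothetical keyword routes; the real coordinator is LLM-driven.
ROUTES = {
    "pdf": FileProcessingAgent(),
    "image": ImageProcessingAgent(),
    "audio": AudioProcessingAgent(),
}

def route(task: str) -> str:
    for keyword, agent in ROUTES.items():
        if keyword in task.lower():
            return agent.handle(task)
    # No specialist matched: fall back to plain conversation.
    return ConversationalAgent().handle(task)
```

For example, `route("extract tables from report.pdf")` would land on `FileProcessingAgent`, while an open-ended question falls through to `ConversationalAgent`.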
How to install
Prerequisites:
- Python 3.8+
- pip
- Internet access to install dependencies

1. Clone the repository

```bash
git clone https://github.com/cyberzhang1/SyberAgent.git
cd SyberAgent
```

2. Create and activate a virtual environment

```bash
python -m venv .venv
# Windows
.venv\Scripts\activate
# Linux/macOS
source .venv/bin/activate
```

3. Install dependencies

```bash
pip install -r requirements.txt
```
4. Configure environment variables

```bash
# Copy the example env file if available, or create your own
cp .env.example .env
```

Edit .env to set API keys and config:

```
API_KEY=your_api_key_here
TAVILY_API_KEY=your_tavily_key_here
BASE_URL=https://api.deepseek.com/v1
MODEL=deepseek-chat
```
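At runtime these values are read from the environment. The sketch below shows one plausible way to do that with `os.getenv`; the function name `load_config` and the default values are assumptions, and the project's actual handling in core/config.py may differ.

```python
import os

def load_config() -> dict:
    """Hypothetical config loader mirroring the .env keys above.

    Defaults are illustrative assumptions, not the project's values.
    """
    return {
        "api_key": os.getenv("API_KEY", ""),
        "tavily_api_key": os.getenv("TAVILY_API_KEY", ""),
        "base_url": os.getenv("BASE_URL", "https://api.deepseek.com/v1"),
        "model": os.getenv("MODEL", "deepseek-chat"),
    }
```

Keeping secrets in the environment (rather than hard-coded) is what lets the same code target different backends just by editing .env.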
5. Run the server

```bash
python main.py
```

Optionally, run with development settings or tests as needed.
Additional notes
- Ensure API keys are kept secret and not committed to version control.
- The system supports multiple AI models; configure BASE_URL and MODEL to target your preferred backend.
- For large documents or media, consider increasing MAX_FILE_SIZE and adjusting OCR/processing engine options in core/config.py.
- If you encounter performance issues, enable asynchronous handling and verify that the runtime has sufficient CPU/GPU resources.
- Use the provided examples in the README as templates for interacting with different agents (document processing, image analysis, audio transcription).
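Settings like `MAX_FILE_SIZE="100MB"` are human-readable size strings. A helper to convert them to byte counts might look like the sketch below; `parse_size` is a hypothetical name, and the project's actual parsing (if any) in core/config.py may differ.

```python
import re

# Hypothetical helper: turn strings like "100MB" into byte counts.
_UNITS = {"B": 1, "KB": 1024, "MB": 1024**2, "GB": 1024**3}

def parse_size(value: str) -> int:
    match = re.fullmatch(r"(\d+)\s*([KMG]?B)", value.strip().upper())
    if not match:
        raise ValueError(f"invalid size string: {value!r}")
    number, unit = match.groups()
    return int(number) * _UNITS[unit]
```

For example, `parse_size("100MB")` yields 104857600 bytes, which the server could compare against an incoming upload's length before accepting it.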