yuque
Yuque Model Context Protocol (MCP) proxy server
claude mcp add --transport stdio suonian-yuque-mcp-server --env YUQUE_TOKEN="your-token-here" -- python -m uvicorn app_async:app --host 0.0.0.0 --port 3000
How to use
This MCP server acts as a bridge between MCP clients and the Yuque platform. It implements a wide range of MCP features: knowledge base management, document handling, search, user and team management, and caching with Redis (or an in-memory fallback). It can run in synchronous or asynchronous mode (FastAPI/httpx in async mode) and exposes endpoints compatible with MCP clients such as Chatbox, Claude Desktop, Cherry Studio, and Cursor.
To get started, configure your Yuque access token via the YUQUE_TOKEN environment variable, a configuration file, or HTTP headers, as described in the project documentation. Once the server is running, check its health at /health and issue MCP commands such as list_repos, get_repo, create_doc, search_docs, and list_user_info from the MCP client of your choice. The server also supports token-based HTTP header authentication for secure access.
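Tool calls reach an MCP server as standard JSON-RPC 2.0 messages. As an illustrative sketch (the `q` argument name is an assumption, not taken from this project; check the server's actual tool schema), a `search_docs` call could be built like this:

```python
import json

def build_tool_call(tool_name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 payload in the shape MCP clients send for tool calls."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# "q" is a hypothetical parameter name for illustration only.
payload = build_tool_call("search_docs", {"q": "release notes"})
print(json.dumps(payload, indent=2))
```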
How to install
Prerequisites:
- Python 3.7 or newer
- git
- Optional: a Redis server if you want shared/persistent caching (an in-memory fallback is used otherwise)
Installation steps:
- Clone the repository
git clone https://github.com/suonian/yuque-mcp-server.git
cd yuque-mcp-server
- (Optional) Create a virtual environment and install dependencies
python3 -m venv venv
source venv/bin/activate # on macOS/Linux
# Windows: venv\Scripts\activate
pip install -r requirements.txt
- Set up the token configuration (one of several supported methods):
  - Environment variable:
    export YUQUE_TOKEN=your-token-here
  - Configuration file (yuque-config.env), following the README guidance:
    # yuque-config.env
    YUQUE_TOKEN=your-token-here
    PORT=3000
  - HTTP headers in the client configuration (X-Yuque-Token).
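For the header-based method, an MCP client that supports custom HTTP headers could be pointed at the server roughly like this (the /mcp URL path and the exact config shape are assumptions here; adjust them to your client's documentation):

```json
{
  "mcpServers": {
    "yuque": {
      "url": "http://localhost:3000/mcp",
      "headers": {
        "X-Yuque-Token": "your-token-here"
      }
    }
  }
}
```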
- Run the server (asynchronous FastAPI/uvicorn variant):
python -m uvicorn app_async:app --host 0.0.0.0 --port 3000
- (Optional) Run via Docker (as shown in the project Quick Start) for containerized deployment.
- Validate:
curl http://localhost:3000/health
Additional notes
Tips and considerations:
- Token priority: HTTP Header (X-Yuque-Token) > environment variable (YUQUE_TOKEN) > configuration file. Ensure tokens are kept secret and not committed to version control.
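The priority chain above can be sketched in a few lines (an illustrative sketch, not the project's actual code; the config dict stands in for whatever yuque-config.env parses into):

```python
import os

def resolve_token(headers, config):
    """Resolve the Yuque token using the documented priority:
    HTTP header > environment variable > configuration file."""
    return (
        headers.get("X-Yuque-Token")
        or os.environ.get("YUQUE_TOKEN")
        or config.get("YUQUE_TOKEN")
    )

# The header, when present, wins over both the env var and the config file.
token = resolve_token({"X-Yuque-Token": "from-header"}, {"YUQUE_TOKEN": "from-file"})
```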
- If Redis is unavailable, the server will automatically fall back to in-memory caching to maintain responsiveness.
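A cache with this kind of graceful fallback might look like the following (a minimal sketch, not the project's implementation; the Redis URL is an assumption):

```python
class Cache:
    """Prefer Redis, but fall back to an in-process dict when Redis
    is not installed or not reachable."""

    def __init__(self, redis_url="redis://localhost:6379/0"):
        self._memory = {}
        self._redis = None
        try:
            import redis  # optional dependency
            client = redis.Redis.from_url(redis_url, socket_connect_timeout=1)
            client.ping()  # raises if the server is unreachable
            self._redis = client
        except Exception:
            pass  # silently fall back to in-memory storage

    def set(self, key, value):
        if self._redis is not None:
            self._redis.set(key, value)
        else:
            self._memory[key] = value

    def get(self, key):
        if self._redis is not None:
            raw = self._redis.get(key)
            return raw.decode() if raw is not None else None
        return self._memory.get(key)

cache = Cache()
cache.set("repo:1", "cached-body")
```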
- The service supports both synchronous (Flask-style) and asynchronous (FastAPI/uvicorn) operation modes; choose based on your workload needs.
- When deploying behind a reverse proxy, ensure that the proxy allows the configured port (default 3000) and forwards headers correctly for authentication.
- For systemd, macOS launchd, or Windows Service setups, refer to the project documentation for service scripts and health checks.
- If you encounter issues, check the logs for authentication errors, Redis connectivity, and token validity; health endpoints and logs provide guidance on misconfigurations.