Z.ai2api
Proxies Z.ai Chat as an OpenAI/Anthropic-compatible API, with multi-model list mapping, tokenless (anonymous) access, intelligent chain-of-thought handling, image upload, and more.
claude mcp add --transport stdio hmjz100-z.ai2api python app.py \
  --env BASE="https://chat.z.ai (upstream API base URL; default: https://chat.z.ai)" \
  --env PORT="8080 (service port; default: 8080)" \
  --env MODEL="GLM-4.5 (fallback model when none is specified; default: GLM-4.5)" \
  --env TOKEN="access token (optional; required if ANONYMOUS_MODE is false)" \
  --env DEBUG_MODE="true or false (show debug information; default: false)" \
  --env ANONYMOUS_MODE="true or false (guest mode; default: true; in guest mode file/image upload is not supported)" \
  --env THINK_TAGS_MODE="reasoning|think|strip|details (default: reasoning)"
How to use
Z.ai2api functions as an OpenAI-compatible proxy for Z.ai. It supports optional tokenless (guest) operation, intelligent handling of chain-of-thought output, and, once logged in, image uploads. On startup the server queries /api/models to discover available models and maps model names for compatibility. With ANONYMOUS_MODE enabled, guests can use the service without a token, though file/image upload is unavailable in guest mode. THINK_TAGS_MODE controls how the assistant's chain-of-thought is rendered (reasoning, think, strip, or details). To run, configure the .env variables as needed, then start the Python server; it listens on a port (default 8080) and accepts client requests that expect an OpenAI-compatible API.
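For illustration, any OpenAI-compatible client or plain HTTP call can target the proxy. The sketch below assumes the standard /v1/chat/completions route and the default port; the exact route, model name, and auth header depend on your configuration and are assumptions, not confirmed details of this project:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8080"  # assumption: default PORT, no path prefix


def build_payload(prompt: str, model: str = "GLM-4.5") -> dict:
    """Standard OpenAI-style chat-completions request body."""
    return {
        "model": model,  # omit to let the server fall back to the MODEL env var
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }


def chat(prompt: str, model: str = "GLM-4.5") -> bytes:
    """POST the payload to the proxy's assumed OpenAI-compatible endpoint."""
    req = urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(build_payload(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        # With ANONYMOUS_MODE=false, also send: "Authorization": "Bearer <TOKEN>"
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()


# Show the request shape without contacting the server:
print(json.dumps(build_payload("Hello!"), indent=2))
# chat("Hello!") would perform the actual request once the server is running.
```

Because the payload follows the OpenAI format, existing OpenAI SDKs should also work by pointing their base URL at the proxy.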
How to install
Prerequisites:
- Python 3.12+ installed
- Git installed
- Basic familiarity with environment configuration
Installation steps:
- Clone the repository
  git clone https://github.com/hmjz100/Z.ai2api.git
  cd Z.ai2api
- (Optional) Create and activate a virtual environment
  python -m venv venv
  source venv/bin/activate   # Unix/macOS
  venv\Scripts\activate      # Windows
- Install dependencies
  pip install -r requirements.txt
- Prepare environment configuration
  - Create a .env file or rely on the mcp_config env mapping
  - Required/optional vars include BASE, PORT, MODEL, TOKEN, ANONYMOUS_MODE, THINK_TAGS_MODE, DEBUG_MODE
- Run the server
  python app.py
- Verify accessibility
  - Access http://localhost:8080 (or your configured PORT) and test the OpenAI-compatible endpoints.
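The configuration step above mentions a .env file; a minimal example covering the listed variables might look like this (values shown are the documented defaults, with TOKEN left blank for anonymous mode):

```
BASE=https://chat.z.ai
PORT=8080
MODEL=GLM-4.5
TOKEN=
DEBUG_MODE=false
ANONYMOUS_MODE=true
THINK_TAGS_MODE=reasoning
```

Set ANONYMOUS_MODE=false and fill in TOKEN if you need logged-in features such as file/image upload.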
Additional notes
Tips:
- If you plan to operate in anonymous mode, ensure ANONYMOUS_MODE=true; note that file/image uploads are not available in guest mode.
- THINK_TAGS_MODE controls how the reasoning/chain-of-thought is formatted; choose based on your preference for readability or compactness.
- BASE should point to your upstream Z.ai base API. Adjust PORT if you need to run behind a reverse proxy or on a non-default port.
- If you encounter authentication or token errors, verify TOKEN existence and the ANONYMOUS_MODE setting.
- When upgrading to Canary or newer branches, verify compatibility of model endpoints and any breaking changes in /api/models.
- For production deployments, consider securing the API with proper token management and restricting CORS as needed.
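The reverse-proxy tip above can be sketched with a minimal nginx location block (assumptions: the service runs on the default port 8080, the hostname is hypothetical, and TLS certificate directives are omitted):

```
server {
    listen 443 ssl;
    server_name api.example.com;  # hypothetical hostname

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        # Streamed (SSE) responses need buffering disabled:
        proxy_buffering off;
        proxy_read_timeout 300s;
    }
}
```

Disabling proxy_buffering matters if clients use stream=true, since buffered responses would otherwise arrive all at once.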