
mcp-server-glm-vision

A Model Context Protocol (MCP) server that integrates GLM-4.5V from Z.AI with Claude Code.

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio danilofalcao-mcp-server-glm-vision python glm-vision.py \
  --env GLM_MODEL="glm-4.5v" \
  --env GLM_API_KEY="<your_glm_api_key>" \
  --env GLM_API_BASE="https://api.z.ai/api/paas/v4"

How to use

This MCP server exposes image analysis powered by GLM-4.5V from Z.AI. Its core tool, glm-vision, accepts an image (local file path or URL) plus a prompt describing what to analyze. The server runs inside an MCP environment and reads your GLM API key and model configuration from environment variables. Typical usage is to start the server with Python, then invoke glm-vision from an MCP-enabled client or a Claude Code workflow. Optional parameters such as temperature, thinking mode, and max_tokens tailor the response length and reasoning depth.

To use it, ensure your environment variables GLM_API_KEY, GLM_API_BASE, and GLM_MODEL are set (either via a .env file or directly in the MCP config). The available tool is glm-vision, which processes an image_path and a prompt to generate a descriptive or analytical response about the image. You can test locally by running glm-vision.py, and then interact with it through your MCP client or Claude Code integration to perform image analysis tasks such as identifying objects, describing scenes, or answering specific questions about the image content.
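Once the server is registered, MCP clients invoke the tool via a JSON-RPC tools/call request over the stdio transport. As an illustration, here is how such a request could be built in Python. The argument names (image_path, prompt, temperature, max_tokens) follow the description above and are assumptions; the actual server may name them differently:

```python
import json

# Hypothetical MCP tools/call request for the glm-vision tool.
# Argument names mirror the description above; the real glm-vision.py
# may expose a slightly different schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "glm-vision",
        "arguments": {
            "image_path": "/tmp/diagram.png",
            "prompt": "Describe the components in this architecture diagram.",
            "temperature": 0.2,
            "max_tokens": 1024,
        },
    },
}

# Over the stdio transport, each message travels as a single JSON document.
wire_message = json.dumps(request)
print(wire_message)
```

In practice you rarely build these messages by hand: Claude Code (or any MCP client) constructs them for you once the server is registered.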

How to install

Prerequisites:

  • Python 3.10 or higher
  • GLM API key from Z.AI
  • Claude Code installed (for integration with Claude Code if desired)

Setup steps:

  1. Clone or create the project directory:
cd /path/to/your/project
  2. Create and activate a virtual environment:
python3 -m venv env
source env/bin/activate  # On Windows: env\Scripts\activate
  3. Install dependencies:
pip install -r requirements.txt
# or with uv (recommended)
uv pip install -r requirements.txt
  4. Set up environment variables:
cp .env.example .env
# Edit .env with your GLM API key from Z.AI
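The .env file mirrors the variables used in the installation command. A minimal example, with a placeholder key (the two optional values show the documented defaults):

```
# Required: your Z.AI API key
GLM_API_KEY=your_glm_api_key_here
# Optional: defaults shown
GLM_API_BASE=https://api.z.ai/api/paas/v4
GLM_MODEL=glm-4.5v
```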
  5. Add the server to Claude Code (example using uv):
# Using uv (recommended)
uv run mcp install -e . --name "GLM Vision Server"

# Or manually add it to the Claude Code configuration:
claude mcp add-json --scope user glm-vision '{
  "type": "stdio",
  "command": "/path/to/your/project/env/bin/python",
  "args": ["/path/to/your/project/glm-vision.py"],
  "env": {"GLM_API_KEY": "your_api_key_here"}
}'
  6. Run the server locally to verify:
# With uv (recommended)
uv run python glm-vision.py

# Or directly with python
python glm-vision.py

Additional notes

  • Environment variables: GLM_API_KEY is required. GLM_API_BASE defaults to https://api.z.ai/api/paas/v4 and GLM_MODEL defaults to glm-4.5v; adjust as needed for your setup.
  • If you encounter API key issues, double-check that the key is valid and has the necessary permissions for GLM-4.5V access.
  • The glm-vision tool supports local image files and URLs. Ensure the image_path is accessible from the environment where the server runs.
  • If you’re using Claude Code integration, you can add the server via the provided JSON snippet or by wiring the stdio configuration as shown.
  • For debugging, run glm-vision.py directly to isolate issues before integrating with MCP.
  • When using uv, running the script directly (uv run python glm-vision.py) is the usual approach here; the uvx workflow only applies if the server is distributed as an installable Python package.
  • If you need to adjust model parameters (temperature, thinking, max_tokens), pass them as tool arguments where the server exposes them, or modify the glm-vision.py code to expose them as MCP tool parameters.
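To make the local-file-vs-URL distinction concrete, here is a hedged sketch of how an image reference can be normalized before being sent to a GLM-style, OpenAI-compatible chat completions endpoint. The message shape follows the common image_url content format; the function and parameter names are illustrative, and the actual glm-vision.py may build its request differently:

```python
import base64
import mimetypes


def to_image_url(image_path: str) -> str:
    """Return a URL usable in an image_url content block.

    Remote URLs pass through unchanged; local files are inlined
    as base64 data URLs so the API can read them.
    """
    if image_path.startswith(("http://", "https://")):
        return image_path
    mime = mimetypes.guess_type(image_path)[0] or "image/png"
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{encoded}"


def build_payload(image_path: str, prompt: str,
                  temperature: float = 0.8, max_tokens: int = 1024) -> dict:
    # Message structure follows the OpenAI-compatible vision format;
    # the real server may differ in field names or defaults.
    return {
        "model": "glm-4.5v",
        "temperature": temperature,
        "max_tokens": max_tokens,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": to_image_url(image_path)}},
                {"type": "text", "text": prompt},
            ],
        }],
    }


payload = build_payload("https://example.com/cat.jpg", "What breed is this cat?")
```

This also shows why image_path must be accessible from where the server runs: a local path is opened and read on the server side before anything reaches the API.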
