comfy
A server using the FastMCP framework to generate images from prompts via a remote Comfy server.
claude mcp add --transport stdio lalanikarim-comfy-mcp-server uvx comfy-mcp-server \
  --env COMFY_URL="http://your-comfy-server-url:port" \
  --env PROMPT_LLM="model-name" \
  --env OUTPUT_MODE="file" \
  --env OUTPUT_NODE_ID="9" \
  --env PROMPT_NODE_ID="6" \
  --env OLLAMA_API_BASE="http://localhost:11434" \
  --env COMFY_WORKFLOW_JSON_FILE="/path/to/the/comfyui_workflow_export.json"
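For clients configured through an mcp_config file (such as Claude Desktop), the same registration can be sketched as JSON; the server name and all values below are placeholders mirroring the CLI example above, and the config file location varies by client and platform:

```json
{
  "mcpServers": {
    "comfy": {
      "command": "uvx",
      "args": ["comfy-mcp-server"],
      "env": {
        "COMFY_URL": "http://your-comfy-server-url:port",
        "COMFY_WORKFLOW_JSON_FILE": "/path/to/the/comfyui_workflow_export.json",
        "PROMPT_NODE_ID": "6",
        "OUTPUT_NODE_ID": "9",
        "OUTPUT_MODE": "file",
        "PROMPT_LLM": "model-name",
        "OLLAMA_API_BASE": "http://localhost:11434"
      }
    }
  }
}
```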
How to use
The Comfy MCP Server uses the FastMCP framework to generate images by submitting prompts to a remote Comfy server and retrieving the resulting images. It relies on a Comfy workflow exported from the Comfy UI to structure prompts and outputs. When the server is running, you supply a topic or prompt; the server builds a full prompt via a language model (through LangChain, optionally backed by Ollama) and submits it to the remote Comfy instance for image generation. The workflow's text-prompt node ID and output node ID are configurable, so you can direct where the input goes and where the final image is read from. If an Ollama server is available, you can enable local prompt generation by pointing the server at the Ollama API and a chosen model name. The example configuration above launches the server via uvx, setting environment variables for COMFY_URL, the workflow JSON file, and the node IDs, with OUTPUT_MODE set to file so the generated image is stored locally.
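The prompt-injection and submission step can be sketched roughly as follows. This is illustrative, not the server's actual code: `build_payload` and `submit` are hypothetical names, and the sketch assumes an API-format ComfyUI workflow export (a JSON object keyed by node ID, where each node carries an "inputs" dict) posted to ComfyUI's /prompt endpoint.

```python
import json
import urllib.request

def build_payload(workflow, prompt_node_id, prompt_text):
    """Insert the generated prompt text into the workflow's text-prompt node.

    Assumes an API-format ComfyUI export: a dict keyed by node ID,
    where each node has an "inputs" dict with a "text" field.
    """
    workflow = dict(workflow)  # shallow copies so the original export stays untouched
    workflow[prompt_node_id] = dict(workflow[prompt_node_id])
    inputs = dict(workflow[prompt_node_id].get("inputs", {}))
    inputs["text"] = prompt_text
    workflow[prompt_node_id]["inputs"] = inputs
    return {"prompt": workflow}

def submit(comfy_url, payload):
    """POST the patched workflow to the Comfy server's /prompt endpoint."""
    req = urllib.request.Request(
        comfy_url.rstrip("/") + "/prompt",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Toy workflow with a text-encode node at ID "6", as in the example config:
toy = {"6": {"class_type": "CLIPTextEncode", "inputs": {"text": ""}}}
payload = build_payload(toy, "6", "a watercolor fox")
```

In the real server, the prompt text would come from the configured language model and the node ID from PROMPT_NODE_ID.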
How to install
Prerequisites:
- Python environment with uv/uvx (Python project manager) installed
- A Comfy UI workflow export JSON file ready on disk
- Access to a remote Comfy server
Installation steps:
- Install uvx and dependencies for the MCP server: uvx mcp[cli]
- Prepare environment variables for the server:
  - Obtain the Comfy server URL and the workflow JSON export path
  - Determine the node IDs for the text prompt (PROMPT_NODE_ID) and final image output (OUTPUT_NODE_ID)
  - Decide on OUTPUT_MODE (e.g., file or url)
- Run the server using uvx in development or production mode as needed:
  - uvx comfy-mcp-server
- (Optional) If using Ollama for prompt generation, ensure the Ollama server is running and configure:
  - OLLAMA_API_BASE to the Ollama API URL
  - PROMPT_LLM to the desired model name hosted on Ollama
- Verify the server starts and listens for requests, then test by sending prompts according to your workflow.
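Before launching, it can help to confirm every required variable is set. The check below is a hypothetical preflight, not part of the server; the variable names come from this page, and the server itself may validate differently:

```python
import os

# Variables the example configuration on this page sets.
REQUIRED = [
    "COMFY_URL",
    "COMFY_WORKFLOW_JSON_FILE",
    "PROMPT_NODE_ID",
    "OUTPUT_NODE_ID",
    "OUTPUT_MODE",
]
OLLAMA = ["OLLAMA_API_BASE", "PROMPT_LLM"]  # only needed for Ollama prompt generation

def missing_vars(env, use_ollama=False):
    """Return the names of required variables that are unset or empty."""
    required = REQUIRED + (OLLAMA if use_ollama else [])
    return [name for name in required if not env.get(name)]

# In practice you would call missing_vars(os.environ).
# Example with one variable left unset:
example = {
    "COMFY_URL": "http://comfy.local:8188",
    "COMFY_WORKFLOW_JSON_FILE": "/path/to/the/comfyui_workflow_export.json",
    "PROMPT_NODE_ID": "6",
    "OUTPUT_NODE_ID": "9",
}
missing = missing_vars(example)
```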
Additional notes
Tips and common points:
- Ensure COMFY_URL points to a reachable Comfy server and the workflow JSON is valid for that server.
- The PROMPT_NODE_ID and OUTPUT_NODE_ID must correspond to the actual nodes in your exported workflow; incorrect IDs will cause failures in prompt construction or image retrieval.
- If OUTPUT_MODE is set to file, the server will save the generated image to disk; if set to url, it will return a URL to the generated image.
- If you enable Ollama integration, ensure the API base URL (OLLAMA_API_BASE) is reachable and the PROMPT_LLM model name matches a valid model on your Ollama server.
- Keep your dependencies up to date and monitor the MCP server logs for troubleshooting image generation or workflow errors.
- The default example uses Python with uvx; if you switch to another environment, adjust the mcp_config command and environment accordingly.
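Since wrong node IDs are a common failure mode, a quick sanity check against the exported workflow can catch them before the server runs. `check_node_ids` below is a hypothetical helper, assuming the API-format export (a JSON object keyed by node ID):

```python
import json
import tempfile

def check_node_ids(workflow_path, prompt_node_id, output_node_id):
    """Return a list of problems: configured node IDs missing from the export."""
    with open(workflow_path) as f:
        workflow = json.load(f)
    problems = []
    for label, node_id in [("PROMPT_NODE_ID", prompt_node_id),
                           ("OUTPUT_NODE_ID", output_node_id)]:
        if node_id not in workflow:
            problems.append(f"{label} {node_id!r} not found in workflow")
    return problems

# Toy export with nodes "6" and "9", matching the example configuration:
toy = {"6": {"class_type": "CLIPTextEncode"}, "9": {"class_type": "SaveImage"}}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(toy, f)
    path = f.name

ok = check_node_ids(path, "6", "9")     # empty list: IDs match
bad = check_node_ids(path, "6", "99")   # reports the bogus output node
```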
Related MCP Servers
comfyui
lightweight Python-based MCP (Model Context Protocol) server for local ComfyUI
nautex
MCP server for guiding Coding Agents via end-to-end requirements to implementation plan pipeline
mcp-yfinance
Real-time stock API with Python, MCP server example, yfinance stock analysis dashboard
cloudwatch-logs
MCP server from serkanh/cloudwatch-logs-mcp
servicenow-api
ServiceNow MCP Server and API Wrapper
the-mcp-company
TheMCPCompany: Creating General-purpose Agents with Task-specific Tools