ultralytics_mcp_server
MCP server from MetehanYasar11/ultralytics_mcp_server
claude mcp add --transport stdio metehanyasar11-ultralytics_mcp_server node src/server.js \
  --env MCP_PORT="8092" \
  --env STREAMLIT_PORT="8501" \
  --env TENSORBOARD_PORT="6006" \
  --env CUDA_VISIBLE_DEVICES="<your_cuda_devices>"
How to use
This MCP server integrates Ultralytics YOLO-based models with N8N workflow automation. It exposes a dedicated MCP endpoint on port 8092 that N8N can connect to via the SSE transport. The server provides seven tools for AI workflows:
- detect_objects: real-time object detection in images
- train_model: custom YOLO model training
- evaluate_model: model performance assessment
- predict_batch: batch processing of multiple images
- export_model: model conversion to formats such as ONNX or TensorRT
- benchmark_model: performance measurement
- analyze_dataset: dataset statistics and validation metrics
Use the MCP endpoint to trigger these tools from your automation flows, passing input data and receiving structured results suitable for downstream tasks in your N8N workflows.
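Tool invocations follow the MCP JSON-RPC shape. Below is a minimal sketch of a detect_objects request body; the argument names (image_path, confidence) are illustrative assumptions, since the actual schema is defined by the server's tool definitions:

```shell
# Write a hypothetical JSON-RPC request for the detect_objects tool.
# The argument names (image_path, confidence) are illustrative assumptions;
# check the server's tool schema for the real parameter names.
cat > detect_request.json <<'EOF'
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "detect_objects",
    "arguments": { "image_path": "/data/sample.jpg", "confidence": 0.25 }
  }
}
EOF
echo "wrote detect_request.json"
```

N8N's MCP client builds payloads like this for you; writing one by hand is mainly useful when testing the endpoint directly.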
How to install
Prerequisites:
- Docker and Docker Compose (recommended for quick setup)
- Optional: NVIDIA Docker runtime if you plan to use GPU acceleration
- Node.js (for local run) if you prefer running the MCP server without Docker
Docker-based installation:
- Ensure Docker and Docker Compose are installed on your system.
- Clone or download the repository containing ultralytics_mcp_server.
- Navigate to the project root and start the services:
  docker-compose up --build -d
- The MCP server will be available at http://localhost:8092, the Streamlit UI at http://localhost:8501, and TensorBoard at http://localhost:6006.
Local Node.js installation (no Docker):
- Prerequisites: Node.js installed on your system.
- Install dependencies (if any) and run the MCP server:
  cd ultralytics_mcp_server
  node src/server.js
- Ensure the MCP_PORT environment variable is set to 8092 (the README default), then access the MCP endpoint at http://localhost:8092.
Validation:
- Verify the MCP health endpoint responds: http://localhost:8092/health
- If using the Docker setup, confirm the Streamlit and TensorBoard endpoints are also reachable.
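The health check can be scripted. This sketch assumes only the /health path listed above and the default port 8092:

```shell
# Build the health-check URL from MCP_PORT, falling back to the default 8092.
MCP_PORT="${MCP_PORT:-8092}"
HEALTH_URL="http://localhost:${MCP_PORT}/health"

# -f makes curl exit non-zero on HTTP errors; the || branch keeps the
# script going with a readable message when the server is not running.
curl -fsS "$HEALTH_URL" || echo "MCP server not reachable at $HEALTH_URL"
```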
Additional notes
Environment variables:
- MCP_PORT controls the MCP server listening port (default 8092).
- STREAMLIT_PORT and TENSORBOARD_PORT may be configured to match your local setup.
- CUDA_VISIBLE_DEVICES can be used to pin GPUs if using CUDA-enabled containers.
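For a local (non-Docker) run, the variables above can be exported before starting the server. The GPU index 0 below is an illustrative choice, not a requirement:

```shell
# Environment for a local run; adjust the values to match your setup.
export MCP_PORT=8092             # MCP server listening port
export STREAMLIT_PORT=8501       # Streamlit UI port
export TENSORBOARD_PORT=6006     # TensorBoard port
export CUDA_VISIBLE_DEVICES=0    # pin to GPU 0 (illustrative); unset for CPU-only runs

echo "MCP will listen on port ${MCP_PORT}"
# node src/server.js             # then start the server
```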
Common issues:
- If the MCP health check fails, ensure the MCP server process is running and the port is not blocked by a firewall.
- For GPU-enabled training/inference, make sure NVIDIA Docker runtime is installed and the host has compatible drivers.
- When using Docker, diagnose startup problems with docker-compose logs, for example: docker-compose logs ultralytics-container or docker-compose logs mcp-connector-container.
Configuration tips:
- In docker-compose.yml, adjust port mappings and environment variables to fit your network and hardware setup.
- If you modify the server code, rebuild containers with docker-compose up --build -d.
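As a sketch of the kind of adjustment described above, a compose override might look like the following; the host port 9092 and the environment values are illustrative assumptions, not copied from the repository's compose file (the service name ultralytics-container is taken from the log commands above):

```yaml
# docker-compose.override.yml (hypothetical): remap the host port and pin a GPU
services:
  ultralytics-container:
    ports:
      - "9092:8092"   # expose the MCP endpoint on host port 9092 instead
    environment:
      - MCP_PORT=8092
      - CUDA_VISIBLE_DEVICES=0
```

Docker Compose merges an override file with docker-compose.yml automatically, so the base file can stay untouched.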
Related MCP Servers
zen
Selfhosted notes app. Single golang binary, notes stored as markdown within SQLite, full-text search, very low resource usage
MCP-Deepseek_R1
A Model Context Protocol (MCP) server implementation connecting Claude Desktop with DeepSeek's language models (R1/V3)
mcp-fhir
A Model Context Protocol implementation for FHIR
mcp
Inkdrop Model Context Protocol Server
mcp-appium-gestures
This is a Model Context Protocol (MCP) server providing resources and tools for Appium mobile gestures using the Actions API.
dubco-npm
The (Unofficial) dubco-mcp-server enables AI assistants to manage Dub.co short links via the Model Context Protocol. It provides three MCP tools: create_link for generating new short URLs, update_link for modifying existing links, and delete_link for removing short links.