
ultralytics_mcp_server

MCP server from MetehanYasar11/ultralytics_mcp_server

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio metehanyasar11-ultralytics_mcp_server node src/server.js \
  --env MCP_PORT="8092" \
  --env STREAMLIT_PORT="8501" \
  --env TENSORBOARD_PORT="6006" \
  --env CUDA_VISIBLE_DEVICES="<your_cuda_devices>"

How to use

This MCP server integrates Ultralytics YOLO models with N8N workflow automation. It exposes a dedicated MCP endpoint on port 8092 that N8N can connect to over the SSE transport. The server provides seven tools for AI workflows: detect_objects for real-time object detection in images, train_model for custom YOLO model training, evaluate_model for assessing model performance, predict_batch for batch processing of multiple images, export_model for converting models to formats such as ONNX or TensorRT, benchmark_model for measuring performance, and analyze_dataset for computing dataset statistics and validation metrics. Trigger these tools from your automation flows through the MCP endpoint, passing input data and receiving structured results suitable for downstream steps in your N8N workflows.
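MCP tool invocations travel as JSON-RPC 2.0 requests using the tools/call method. The sketch below builds such a request body for the detect_objects tool; the argument names (image_path, confidence) are hypothetical placeholders, since the real parameter names come from the server's tool schemas.

```python
import json

def build_tool_call(tool_name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 request body for an MCP tools/call invocation."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical arguments -- consult the server's tool schemas for the real names.
payload = build_tool_call(
    "detect_objects",
    {"image_path": "samples/street.jpg", "confidence": 0.5},
)
print(payload)
```

The same envelope works for any of the seven tools; only the name and arguments fields change.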

How to install

Prerequisites:

  • Docker and Docker Compose (recommended for quick setup)
  • Optional: NVIDIA Docker runtime if you plan to use GPU acceleration
  • Node.js (for local run) if you prefer running the MCP server without Docker
  1. Docker-based installation
  • Ensure Docker and Docker Compose are installed on your system.
  • Clone or download the repository containing ultralytics_mcp_server.
  • Navigate to the project root and start the services:
docker-compose up --build -d
  2. Local Node.js installation (no Docker)
  • Prerequisite: Node.js installed on your system.
  • Install dependencies (if any) and run the MCP server:
cd ultralytics_mcp_server
node src/server.js
  • Ensure the MCP_PORT environment variable is set to 8092 (the README default). The MCP endpoint is then available at http://localhost:8092.
  3. Validation
  • Verify the MCP health endpoint responds: http://localhost:8092/health
  • If using the Docker setup, confirm the Streamlit and TensorBoard endpoints are also reachable.

Additional notes

Environment variables:

  • MCP_PORT controls the MCP server listening port (default 8092).
  • STREAMLIT_PORT and TENSORBOARD_PORT may be configured to match your local setup.
  • CUDA_VISIBLE_DEVICES can be used to pin GPUs if using CUDA-enabled containers.
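These variables can be set directly in docker-compose.yml. The fragment below is a sketch of what such an override might look like; the service name (ultralytics-container) and the port mappings are assumptions, so adjust them to match the repository's actual compose file.

```yaml
# Hypothetical docker-compose.yml fragment -- service name and mappings assumed.
services:
  ultralytics-container:
    environment:
      - MCP_PORT=8092
      - STREAMLIT_PORT=8501
      - TENSORBOARD_PORT=6006
      - CUDA_VISIBLE_DEVICES=0   # pin the first GPU; omit for CPU-only runs
    ports:
      - "8092:8092"   # MCP endpoint
      - "8501:8501"   # Streamlit
      - "6006:6006"   # TensorBoard
```

After editing the compose file, apply the change with docker-compose up --build -d.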

Common issues:

  • If the MCP health check fails, ensure the MCP server process is running and the port is not blocked by a firewall.
  • For GPU-enabled training/inference, make sure NVIDIA Docker runtime is installed and the host has compatible drivers.
  • When using Docker, use docker-compose logs to diagnose startup problems: docker-compose logs ultralytics-container and docker-compose logs mcp-connector-container.

Configuration tips:

  • In docker-compose.yml, adjust port mappings and environment variables to fit your network and hardware setup.
  • If you modify the server code, rebuild containers with docker-compose up --build -d.
