harbor
One command brings up a complete, pre-wired local LLM stack with hundreds of services to explore.
claude mcp add --transport stdio av-harbor npx -y @avcodes/harbor \
  --env HARBOUR_LOG="optional log level (e.g., info, debug)"
How to use
Harbor is a command-line interface (CLI) and companion app that lets you spin up a complete local LLM stack with minimal setup. It can orchestrate backends such as Ollama, llama.cpp, or vLLM, and frontends like Open WebUI. Extra services like SearXNG for web search, Speaches for voice chat, and ComfyUI for image generation can be included or omitted based on your needs. With a single harbor up command, Harbor provisions and connects all chosen services via Docker Compose, so you can start interacting with your models immediately. The tool is designed to remove manual wiring between components and provides a cohesive, plug-and-play local AI stack.
To use Harbor, install the Harbor CLI (via npm) and then run harbor up to start the configured services. You can customize which services are started (for example, enabling only the LLM backend and the web UI, or adding voice or image generation components). Once running, you can access the Open WebUI for web-based model interaction, or use the included tools for web search and voice chat depending on the services you enabled. Harbor handles inter-service networking and configuration so you don’t have to manually configure each component.
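Putting the paragraph above together, a first session might look like the following sketch. The `harbor open` and `harbor down` subcommands are assumptions based on Harbor's documented command set; check `harbor --help` for the exact commands in your installed version.

```shell
# Install the CLI once, then bring up the default stack via Docker Compose
npm install -g @avcodes/harbor
harbor up

# Open the web UI (Open WebUI) in your browser -- assumed subcommand
harbor open

# Tear the stack down when you are done -- assumed subcommand
harbor down
```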
How to install
Prerequisites
- Docker and Docker Compose installed and running on your machine
- Git installed
- Node.js and npm (or a compatible environment to install the Harbor CLI)
Installation steps
- Install the Harbor CLI from npm (global install recommended):
npm install -g @avcodes/harbor
- Verify installation:
harbor --version
- Optional: If you prefer using npx without a global install, you can run Harbor commands via npx when needed:
npx -y @avcodes/harbor up
- Start Harbor with a basic stack (default services):
harbor up
- If you want to include specific services (e.g., SearXNG for web search and Speaches for voice chat), consult the Harbor docs for service flags and compose options, then re-run harbor up with the desired configuration.
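As a sketch of that last step: Harbor's docs describe passing service handles to harbor up. The handles below (searxng, speaches) mirror the service names used elsewhere on this page, but they are assumptions here; confirm the exact handles against the Harbor documentation for your version.

```shell
# Bring up the default stack plus web search and voice chat
# (service handles are assumed -- verify against the Harbor docs)
harbor up searxng speaches
```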
Prerequisite notes:
- Ensure Docker is running and your user has permission to manage Docker resources.
- If you are behind a proxy or have custom DNS, you may need to configure Docker networking accordingly.
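Before running harbor up, the prerequisites above can be sanity-checked with a short, self-contained script (the command names docker, git, node, and npm come straight from the prerequisites list):

```shell
# Report whether each prerequisite command is on PATH
for cmd in docker git node npm; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: MISSING"
  fi
done
```

Note that a "found" result for docker only confirms the CLI is installed; the Docker daemon must also be running for harbor up to succeed.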
Additional notes
Tips and common considerations:
- Harbor provides a single command to deploy multiple local AI services; use harbor migrate only when upgrading to a new Harbor version that changes directory structures (as documented in the migration guide).
- You can enable or disable individual services (web UI, LLM backends, web search, voice chat, image generation) via Harbor's CLI options or configuration files per your setup.
- Environment variables such as HARBOUR_LOG can help with troubleshooting; set a more verbose log level (e.g., debug) while debugging.
- When using Docker, ensure you have enough CPU/RAM headroom for the LLM backends you enable (LLM workloads can be resource-intensive).
- If you upgrade Harbor, follow the migration guide to adapt your services directory structure as described in the v0.4.0 migration notes.
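For the logging tip above, the variable can be set inline when launching the MCP server. The HARBOUR_LOG name comes from the setup snippet at the top of this page; "debug" as a value is an assumption based on that snippet's example levels.

```shell
# Launch the Harbor MCP server with verbose logging for troubleshooting
HARBOUR_LOG=debug npx -y @avcodes/harbor
```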