
harbor

One command brings a complete pre-wired LLM stack with hundreds of services to explore.

Installation
Run this command in your terminal to add the MCP server to Claude Code (the HARBOUR_LOG environment variable is optional and sets the log level, e.g. info or debug):

claude mcp add --transport stdio av-harbor npx -y @avcodes/harbor \
  --env HARBOUR_LOG=info

How to use

Harbor is a command-line interface (CLI) and companion app that lets you spin up a complete local LLM stack with minimal setup. It can orchestrate backends such as Ollama, llama.cpp, or vLLM, and frontends like Open WebUI. Extra services like SearXNG for web search, Speaches for voice chat, and ComfyUI for image generation can be included or omitted based on your needs. With a single harbor up command, Harbor provisions and connects all chosen services via Docker Compose, so you can start interacting with your models immediately. The tool is designed to remove manual wiring between components and provides a cohesive, plug-and-play local AI stack.
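The lifecycle described above maps to a short CLI session. A minimal sketch, assuming a default configuration with an LLM backend and Open WebUI enabled (subcommand names follow the Harbor docs; your version may differ):

```shell
# Start the default stack; Harbor wires the services together via Docker Compose
harbor up

# List the services that are currently running
harbor ps

# Stop and remove the stack when you are done
harbor down
```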

To use Harbor, install the Harbor CLI (via npm) and then run harbor up to start the configured services. You can customize which services are started (for example, enabling only the LLM backend and the web UI, or adding voice or image generation components). Once running, you can access the Open WebUI for web-based model interaction, or use the included tools for web search and voice chat depending on the services you enabled. Harbor handles inter-service networking and configuration so you don’t have to manually configure each component.
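For example, enabling extra services is a matter of naming them when starting the stack. The service handles below (searxng, speaches) are illustrative and taken from the Harbor docs; check your version for the full list:

```shell
# Start the stack with web search and voice chat in addition to the defaults
harbor up searxng speaches

# Open the web UI in your default browser (if supported by your Harbor version)
harbor open
```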

How to install

Prerequisites

  • Docker and Docker Compose installed and running on your machine
  • Git installed
  • Node.js and npm (or a compatible environment to install the Harbor CLI)
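A quick way to confirm the prerequisites are in place is to print each tool's version; any command that errors points at a missing dependency:

```shell
# Each of these should print a version string if the prerequisite is installed
docker --version
docker compose version
git --version
node --version
npm --version
```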

Installation steps

  1. Install the Harbor CLI from npm (global install recommended):
npm install -g @avcodes/harbor
  2. Verify the installation:
harbor --version
  3. Optional: if you prefer npx without a global install, you can run Harbor commands via npx when needed:
npx -y @avcodes/harbor up
  4. Start Harbor with a basic stack (default services):
harbor up
  5. To include specific services (e.g., SearXNG for web search and Speaches for voice chat), consult the Harbor docs for service flags and compose options, then re-run harbor up with the desired configuration.
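If you skip the global install, each of the commands above has an npx equivalent; for example:

```shell
# Run Harbor via npx without installing it globally
npx -y @avcodes/harbor --version
npx -y @avcodes/harbor up
```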

Prerequisite notes:

  • Ensure Docker is running and your user has permission to manage Docker resources.
  • If you are behind a proxy or have custom DNS, you may need to configure Docker networking accordingly.
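If you are behind a proxy, one common approach is to export the standard proxy variables before running Docker and Harbor commands; the host and port below are placeholders for your environment:

```shell
# Placeholder proxy settings; replace with your proxy's host and port
export HTTP_PROXY="http://proxy.example.com:3128"
export HTTPS_PROXY="http://proxy.example.com:3128"
# Keep local traffic off the proxy
export NO_PROXY="localhost,127.0.0.1"
```

Note that the Docker daemon reads its proxy settings separately from the client environment, so image pulls may need additional daemon-level configuration.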

Additional notes

Tips and common considerations:

  • Harbor provides a single command to deploy multiple local AI services; run harbor migrate only when upgrading to a Harbor version that changes the services directory structure (see the v0.4.0 migration notes).
  • You can enable or disable individual services (web UI, LLM backends, web search, voice chat, image generation) via Harbor's CLI options or configuration files per your setup.
  • Environment variables such as HARBOUR_LOG can help with troubleshooting; set a verbose level (e.g., debug) while debugging.
  • When using Docker, ensure you have enough CPU/RAM headroom for the LLM backends you enable (LLM workloads can be resource-intensive).
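As a concrete example of the logging tip above, you can raise the log level for a shell session; HARBOUR_LOG=debug is assumed here based on the environment variable shown in the installation command:

```shell
# Enable verbose logging for this shell session; subsequent Harbor
# invocations (e.g., harbor up) will then emit debug-level logs
export HARBOUR_LOG=debug
```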
