
open-webui

User-friendly AI Interface (Supports Ollama, OpenAI API, ...)

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio open-webui-open-webui -- docker run -i \
  --env OPEN_WEBUI_HOST="http://localhost:8080" \
  --env OPEN_WEBUI_TOKEN="<your-token>" \
  openwebui/open-webui:latest

How to use

Open WebUI is a self-hosted AI platform designed to operate offline and locally. It offers extensible plugins, multiple LLM runners (including Ollama and OpenAI-compatible APIs), built-in RAG functionality, and a web UI with an in-browser code editor, Markdown/LaTeX support, and a rich ecosystem of integrations.

After starting the container, open the web interface to choose your preferred model sources, configure OpenAI-compatible endpoints, and connect to vector databases for local RAG. The platform supports pipelines and plugins, so you can extend it with custom Python code, model adapters, and external tools, and you can enable RBAC, SSO, and enterprise authentication for larger teams. Use the UI to manage models, data, and conversations, and take advantage of web search, image generation/editing, and streaming during chats.
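Because the API is OpenAI-compatible, you can also drive a chat from the command line once the server is up. A minimal sketch with curl — the `/api/chat/completions` path, the bearer-token header, and the model name `llama3` are assumptions based on the OpenAI-compatible convention, so check your instance's API docs for the exact values:

```shell
# Hypothetical example: send one chat message to a running Open WebUI
# instance through its OpenAI-compatible endpoint. Adjust the host,
# path, token, and model name to match your deployment.
curl -s http://localhost:8080/api/chat/completions \
  -H "Authorization: Bearer <your-token>" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama3",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```

The response follows the familiar OpenAI chat-completion shape, so existing OpenAI client libraries can usually be pointed at this endpoint by changing only the base URL and API key.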

How to install

Prerequisites:

  • Docker installed on your host (Docker Engine 19.03+)
  • Optional: a Kubernetes cluster, if you prefer a Kubernetes deployment

Installation steps (Docker):

  1. Ensure Docker is running on your machine.
  2. Pull and run the Open WebUI image: docker run -d --name open-webui -p 8080:8080 openwebui/open-webui:latest
  3. Wait for the container to initialize. Access the UI at http://localhost:8080
  4. (Optional) Set environment variables or mount volumes for persistent storage and configuration, e.g.:
     docker run -d \
       --name open-webui \
       -p 8080:8080 \
       -e OPEN_WEBUI_HOST=http://localhost:8080 \
       -e OPEN_WEBUI_TOKEN=<your-token> \
       -v /path/to/data:/data \
       openwebui/open-webui:latest
  5. For Kubernetes, follow the official docs to deploy using a Deployment/Service and config maps for environment variables.
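Before opening the UI, it can be worth confirming that the container actually came up cleanly. A quick sketch — the `/health` endpoint is an assumption about this image; if your version does not expose it, simply loading http://localhost:8080 in a browser serves the same purpose:

```shell
# Verify the container is running and the web server answers.
docker ps --filter name=open-webui   # the container should be listed as Up
docker logs open-webui --tail 20     # scan the last lines for startup errors

# Hypothetical health check; a healthy instance typically answers with 200.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/health
```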

If you prefer to use npm, Python, or a direct binary, refer to the project docs for alternative install methods, but the Docker path is the most straightforward for Open WebUI.
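For reference, the Python route mentioned above looks roughly like this — the package name `open-webui` and the `serve` subcommand are taken from the project's PyPI listing, but verify them against the current docs before relying on this:

```shell
# Alternative install via pip; running inside a virtual environment
# keeps the app's dependencies isolated from your system Python.
python -m venv .venv && . .venv/bin/activate
pip install open-webui
open-webui serve   # starts the UI, by default on port 8080
```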

Additional notes

  • The image supports Ollama and OpenAI-compatible APIs; configure your preferred LLM runners within the UI or via environment/config files.
  • For persistent storage, mount a volume to /data or the appropriate path used by the container.
  • If you enable RBAC or enterprise authentication, configure LDAP/SSO providers as needed.
  • When using RAG, you can select from multiple vector databases (ChromaDB, PGVector, Qdrant, Milvus, etc.).
  • If you encounter port conflicts, adjust the host port in the docker run command.
  • Refer to the Open WebUI docs for plugin and Pipelines framework usage to extend functionality with custom Python functions and integrations.
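As an illustration of the RAG note above, switching the vector database is typically a matter of environment variables on the container. A sketch for Qdrant — the variable names `VECTOR_DB` and `QDRANT_URI` are assumptions drawn from common Open WebUI configuration patterns, so confirm them in the docs for your version:

```shell
# Hypothetical: point Open WebUI's RAG store at an external Qdrant
# instance instead of the default embedded vector database.
docker run -d --name open-webui -p 8080:8080 \
  -e VECTOR_DB=qdrant \
  -e QDRANT_URI=http://qdrant.example.com:6333 \
  -v /path/to/data:/data \
  openwebui/open-webui:latest
```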
