open-webui
User-friendly AI Interface (Supports Ollama, OpenAI API, ...)
claude mcp add --transport stdio open-webui-open-webui -- docker run -i \
  --env OPEN_WEBUI_HOST="http://localhost:8080" \
  --env OPEN_WEBUI_TOKEN="<your-token>" \
  openwebui/open-webui:latest
How to use
Open WebUI is a self-hosted AI platform designed to operate offline and locally. It offers extensible plugins, multiple LLM runners (including Ollama and OpenAI-compatible APIs), built-in RAG functionality, and a web UI with an in-browser code editor, Markdown/LaTeX support, and a rich ecosystem of integrations.

After starting the container, open the web interface to choose your preferred model sources, configure OpenAI-compatible endpoints, and connect to vector databases for local RAG. The platform supports pipelines and plugins, so you can extend functionality with custom Python code, model adapters, and external tools, and you can enable RBAC, SSO, and enterprise authentication for larger teams. Use the UI to manage models, data, and conversations, and take advantage of web search, image generation/editing, and streaming during chats.
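As a concrete sketch of wiring in an OpenAI-compatible endpoint, the snippet below passes the endpoint and key as environment variables. The variable names `OPENAI_API_BASE_URL` and `OPENAI_API_KEY` are assumptions here; verify them against your version's configuration reference. The command is built as a string and echoed so it can be reviewed before running:

```shell
# Sketch: point Open WebUI at an OpenAI-compatible endpoint (here, a
# local Ollama server). OPENAI_API_BASE_URL / OPENAI_API_KEY are
# assumed variable names; check the Open WebUI docs for your version.
BASE_URL="http://localhost:11434/v1"   # any OpenAI-compatible endpoint
API_KEY="sk-placeholder"

RUN_CMD="docker run -d --name open-webui -p 8080:8080 \
  -e OPENAI_API_BASE_URL=$BASE_URL \
  -e OPENAI_API_KEY=$API_KEY \
  openwebui/open-webui:latest"

# Review the assembled command, then execute it with: eval "$RUN_CMD"
echo "$RUN_CMD"
```

Building the command first keeps secrets and endpoints visible in one place before anything is launched.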
How to install
Prerequisites:
- Docker installed on your host (Docker Engine 19.03+)
- Optional: a Kubernetes cluster if you prefer a Kubernetes deployment
Installation steps (Docker):
- Ensure Docker is running on your machine.
- Pull and run the Open WebUI image: docker run -d --name open-webui -p 8080:8080 openwebui/open-webui:latest
- Wait for the container to initialize. Access the UI at http://localhost:8080
- (Optional) Set environment variables or mount volumes for persistent storage and configuration, e.g.:
  docker run -d \
    --name open-webui \
    -p 8080:8080 \
    -e OPEN_WEBUI_HOST=http://localhost:8080 \
    -e OPEN_WEBUI_TOKEN=<your-token> \
    -v /path/to/data:/data \
    openwebui/open-webui:latest
- For Kubernetes, follow the official docs to deploy using a Deployment/Service and ConfigMaps for environment variables.
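For the Kubernetes path, a minimal manifest might look like the sketch below. The image, port, and env var mirror the Docker example above; names and values here are illustrative, and a real deployment would also add a PersistentVolumeClaim for the data directory — consult the official docs for the supported manifests.

```yaml
# Minimal sketch: Deployment + Service + ConfigMap for Open WebUI.
# Values mirror the Docker example above; adjust per the official docs.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: open-webui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: open-webui
  template:
    metadata:
      labels:
        app: open-webui
    spec:
      containers:
        - name: open-webui
          image: openwebui/open-webui:latest
          ports:
            - containerPort: 8080
          envFrom:
            - configMapRef:
                name: open-webui-config
---
apiVersion: v1
kind: Service
metadata:
  name: open-webui
spec:
  selector:
    app: open-webui
  ports:
    - port: 8080
      targetPort: 8080
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: open-webui-config
data:
  OPEN_WEBUI_HOST: "http://localhost:8080"
```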
If you prefer to use npm, Python, or a direct binary, refer to the project docs for alternative install methods, but the Docker path is the most straightforward for Open WebUI.
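Whichever install path you take, the container needs a moment to initialize before the UI responds. A small polling helper like the one below can confirm readiness; it assumes `curl` is available, and `/health` is a commonly used Open WebUI endpoint — verify it for your version.

```shell
# Poll a URL until it responds, or give up after N tries (1s apart).
# Assumes curl; the /health path should be verified against your
# Open WebUI version's docs.
wait_for_webui() {
  url="$1"
  tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      return 0   # server answered
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1       # never came up within the timeout
}

# Usage: wait_for_webui http://localhost:8080/health && echo "UI is up"
```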
Additional notes
Notes and tips:
- The image supports Ollama and OpenAI-compatible APIs; configure your preferred LLM runners within the UI or via environment/config files.
- For persistent storage, mount a volume to /data or the appropriate path used by the container.
- If you enable RBAC or enterprise authentication, configure LDAP/SSO providers as needed.
- When using RAG, you can select from multiple vector databases (ChromaDB, PGVector, Qdrant, Milvus, etc.).
- If you encounter port conflicts, adjust the host port in the docker run command.
- Refer to the Open WebUI docs for plugin and Pipelines framework usage to extend functionality with custom Python functions and integrations.
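On the port-conflict tip above: only the host side of the `-p` mapping needs to change; the container side stays 8080. A dry-run sketch (echoed for review rather than executed):

```shell
# Remap the host port when 8080 is taken; the UI is then reachable at
# http://localhost:${HOST_PORT}. The container port stays 8080.
HOST_PORT=3000
RUN_CMD="docker run -d --name open-webui -p ${HOST_PORT}:8080 openwebui/open-webui:latest"
echo "$RUN_CMD"   # review, then run with: eval "$RUN_CMD"
```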
Related MCP Servers
mcp-agent
Build effective agents using Model Context Protocol and simple workflow patterns
SearChat
Search + Chat = SearChat (AI chat with search). Supports OpenAI/Anthropic/VertexAI/Gemini, DeepResearch, the SearXNG metasearch engine, and one-click Docker deployment.
k8m
A lightweight, cross-platform mini Kubernetes AI dashboard supporting large language models, agents, and MCP (with configurable operation permissions). It integrates multi-cluster management, intelligent analysis, and real-time anomaly detection, supports multiple architectures, can be deployed as a single binary, and helps streamline cluster management and operations.
AutoDocs
We handle what engineers and IDEs won't: generating and maintaining technical documentation for your codebase, while also providing search with dependency-aware context to help your AI tools understand your codebase and its conventions.
offeryn
Build tools for LLMs in Rust using Model Context Protocol
mcp-chat-studio
A powerful MCP testing tool with multi-provider LLM support (Ollama, OpenAI, Claude, Gemini). Test, debug, and develop MCP servers with a modern UI.