UltraRAG
A Low-Code MCP Framework for Building Complex and Innovative RAG Pipelines
claude mcp add --transport stdio openbmb-ultrarag uvx ultrarag
How to use
UltraRAG is a lightweight RAG development framework built on the MCP (Model Context Protocol) architecture. It standardizes core components such as Retrieval and Generation as independent MCP Servers and pairs them with an MCP Client that orchestrates complex pipelines from YAML configurations. This setup enables low-code orchestration of conditional branches, loops, and other control structures to build end-to-end knowledge-grounded applications. You can leverage the UltraRAG UI and the built-in pipeline builder to construct, debug, and convert pipeline logic into interactive conversational interfaces with just a few clicks. The system is designed around modular, reusable atomic servers, so you can plug in new capabilities by registering Tools that participate in your workflow.
To use it, install the server via the recommended Python environment method (uv) or run it in Docker. Once running, configure your MCP Client with a YAML workflow that specifies the sequence of MCP Servers (e.g., Retriever, Knowledge Ingestion, Reasoning, and Response) and any conditional logic you require. The UI provides a visual editor for pipeline construction, live parameter tuning, and a way to export the resulting logic as a deployable Web UI if you want a demonstration interface for your pipeline.
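As a rough illustration of the YAML-driven orchestration described above, a pipeline configuration might look like the sketch below. The server names, step names, and keys here are assumptions chosen for illustration, not UltraRAG's exact schema; consult the official documentation for the real format.

```yaml
# Hypothetical pipeline sketch -- keys and server/step names are illustrative,
# not the official UltraRAG schema.
servers:
  retriever: servers/retriever      # assumed path to the Retriever MCP Server
  generation: servers/generation    # assumed path to the Generation MCP Server

pipeline:
  - retriever.search:               # retrieve passages for the incoming query
      input:
        query: user_query
      output: passages
  - generation.generate:            # ground the answer in the retrieved passages
      input:
        query: user_query
        context: passages
      output: answer
```

The point of the sketch is the shape: each step names a server and a Tool, and named inputs/outputs wire the steps together, which is what makes branches and loops expressible declaratively.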
How to install
Prerequisites:
- Python 3.8+ installed on your system
- Git installed
- Optional: Docker if you prefer container deployment
Option A: Local source code installation (recommended with uv)
- Install uv (Python environment and package manager) if you don't have it yet, via pip:
pip install uv
- or via the one-line installer:
curl -LsSf https://astral.sh/uv/install.sh | sh
- Clone the UltraRAG repository:
git clone https://github.com/OpenBMB/UltraRAG.git --depth 1
cd UltraRAG
- Create a virtual environment and synchronize dependencies with uv:
uv sync
- Run the MCP server via uvx, matching the uvx invocation defined in mcp_config (the package name is assumed to be ultrarag):
uvx ultrarag
Option B: Docker container deployment
- Ensure Docker is installed and running.
- Run UltraRAG using a prebuilt Docker image (example placeholder commands):
docker run -it ultrarag:latest
- Provide any necessary environment variables via -e flags as required by your deployment (see the Additional notes section below).
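For a persistent Docker deployment, a compose file can capture the environment variables and bind mounts in one place. This is a sketch only: the image tag, service name, paths, and variable names below are assumptions, not values from the official image.

```yaml
# Hypothetical docker-compose sketch -- image tag, paths, and variable
# names are assumptions; check the official UltraRAG image documentation.
services:
  ultrarag:
    image: ultrarag:latest
    environment:
      - LOG_LEVEL=INFO
      - KNOWLEDGE_BASE_PATH=/data/kb
    volumes:
      - ./kb:/data/kb   # bind-mount the knowledge base so data survives restarts
    stdin_open: true
    tty: true
```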
Notes:
- The exact package name and image tag may differ; refer to the official UltraRAG documentation for the precise package name and image repository if you encounter installation issues.
- If you’re using uv-based installation, you may need to use uv sync to create a virtual environment and synchronize dependencies as described in the project docs.
Additional notes
Tips and common issues:
- Environment variables: You may need to configure MCP Client endpoints, API keys, or knowledge base paths via env vars. Start with typical vars like MCP_CLIENT_URL, KNOWLEDGE_BASE_PATH, and LOG_LEVEL, then adjust as needed.
- If you upgrade UltraRAG, ensure your YAML pipelines remain compatible with the new MCP Server versions. Small changes in operator behavior may require updates to control structures.
- For Docker deployments, bind-mount knowledge bases or data directories to preserve data across restarts and upgrades.
- When using uv, you can leverage uv sync to automatically install dependencies and create virtual environments for faster setup.
- If you run into module import errors, verify that you’re executing inside the correct uv-created environment and that the UltraRAG package is correctly installed.
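Putting the environment-variable tip into practice, a starter shell setup might look like the following. The variable names are the ones suggested above as typical starting points; they are assumptions, not guaranteed to be the exact names UltraRAG reads.

```shell
# Hypothetical starter configuration -- variable names are assumptions,
# adjust to whatever your deployment actually reads.
export MCP_CLIENT_URL="http://localhost:8000"
export KNOWLEDGE_BASE_PATH="$HOME/ultrarag/kb"
export LOG_LEVEL="INFO"

# Ensure the knowledge base directory exists before starting the server.
mkdir -p "$KNOWLEDGE_BASE_PATH"

echo "MCP client at $MCP_CLIENT_URL, log level $LOG_LEVEL"
```

Keeping these exports in a sourced file (or an `.env` consumed by Docker's -e/--env-file flags) makes the same settings reusable across local and container runs.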
Related MCP Servers
AstrBot
Agentic IM chatbot infrastructure that integrates many IM platforms, LLMs, plugins, and AI features, and can be your openclaw alternative. ✨
magic
Super Magic. The first open-source all-in-one AI productivity platform (Generalist AI Agent + Workflow Engine + IM + Online collaborative office system)
semantic-router
System Level Intelligent Router for Mixture-of-Models at Cloud, Data Center and Edge
ai4j
A Java SDK for quickly integrating LLM applications. It unifies large models from multiple platforms, such as OpenAi, Zhipu (ChatGLM), DeepSeek, Moonshot (Kimi), Tencent Hunyuan, 01.AI, and more, providing a unified input/output interface (aligned with OpenAi) to eliminate platform differences. It optimizes function calling (Tool Call) and RAG invocation, supports vector databases (Pinecone), includes built-in web-search enhancement, and runs on JDK 1.8, giving users a fast path to integrating AI.
daan
✨Lightweight LLM Client with MCP 🔌 & Characters 👤
openapi
OpenAPI definitions, converters and LLM function calling schema composer.