RAGLight

RAGLight is a modular framework for Retrieval-Augmented Generation (RAG). It makes it easy to plug in different LLMs, embeddings, and vector stores, and now includes seamless MCP integration to connect external tools and data sources.

Installation
Run this command in your terminal to add the MCP server to Claude Code (the --env flags must come before the server command so the Claude CLI picks them up):

claude mcp add --transport stdio bessouat40-raglight \
  --env RAGLIGHT_UI="<true|false>" \
  --env RAGLIGHT_PORT="8000" \
  --env RAGLIGHT_UI_PORT="8501" \
  --env RAGLIGHT_LLM_MODEL="<model-name>" \
  --env RAGLIGHT_LLM_PROVIDER="<provider>" \
  --env RAGLIGHT_VECTOR_STORE="<vector-store>" \
  --env RAGLIGHT_EMBEDDINGS_MODEL="<embeddings-model>" \
  -- python -m raglight serve

How to use

RAGLight is a lightweight, modular Python library for Retrieval-Augmented Generation: it combines document retrieval with language generation. The MCP integration runs the RAGLight REST API server (raglight serve) as an MCP server, so external tools and pipelines can call the API for document chat, ingestion, and indexing. You configure the LLM provider, embeddings model, and vector store via environment variables, then start the server to expose endpoints that answer queries with context from your indexed documents. The bundled CLI chat tooling and REST API let you work with your data programmatically, or through a web UI when enabled.
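As a minimal sketch of talking to the running server from Python: the code below posts a question and reads back an answer. The /chat route and the "question"/"answer" field names are assumptions for illustration, not confirmed parts of the RAGLight API — check the project's docs for the actual routes exposed by raglight serve.

```python
# Hedged sketch: querying a running RAGLight REST API server over HTTP.
# Endpoint path and payload shape are ASSUMPTIONS, not the documented API.
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # matches the RAGLIGHT_PORT default above


def build_chat_payload(question: str) -> dict:
    """Build the JSON request body (the field name is an assumption)."""
    return {"question": question}


def ask(question: str) -> str:
    """POST the question to the (assumed) /chat endpoint, return the answer."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat",
        data=json.dumps(build_chat_payload(question)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("answer", "")


if __name__ == "__main__":
    # Requires a server started with `raglight serve` and indexed documents.
    print(ask("What does the indexed corpus say about deployment?"))
```

Only the standard library is used here, so the same pattern works from CI scripts or other tools without extra dependencies.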

How to install

Prerequisites:

  • Python 3.8+ installed on the host
  • Access to install Python packages (pip)

Install the RAGLight package from PyPI:

pip install raglight

Run the MCP-enabled server:

# Example with common environment variables set
RAGLIGHT_LLM_MODEL=mistral-small-latest \
RAGLIGHT_LLM_PROVIDER=Mistral \
RAGLIGHT_EMBEDDINGS_MODEL=all-MiniLM-L6-v2 \
RAGLIGHT_VECTOR_STORE=local \
raglight serve

If you prefer Docker, you can build/run an image that includes the server and dependencies (example commands depend on your Docker setup):

# Build (if you have a Dockerfile)
docker build -t raglight-server .

# Run
docker run -it --rm -p 8000:8000 \
  -e RAGLIGHT_LLM_MODEL=mistral-small-latest \
  -e RAGLIGHT_LLM_PROVIDER=Mistral \
  -e RAGLIGHT_EMBEDDINGS_MODEL=all-MiniLM-L6-v2 \
  -e RAGLIGHT_VECTOR_STORE=local \
  raglight-server serve

Optional: you can install and use the CLI locally first to initialize data, then run the REST API server as shown above.
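Whichever way you start the server, a small readiness check saves debugging time before you begin sending queries. The sketch below polls the base URL until it responds; it assumes only that the server answers HTTP on its configured port (any HTTP status, even 404, counts as "up").

```python
# Hedged sketch: wait until the RAGLight server answers HTTP on its port.
# Assumes nothing about routes; any HTTP response (even an error status)
# means the server process is up and listening.
import time
import urllib.error
import urllib.request


def wait_for_server(url: str = "http://localhost:8000",
                    timeout: float = 30.0) -> bool:
    """Poll `url` until it responds or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            urllib.request.urlopen(url, timeout=2)
            return True
        except urllib.error.HTTPError:
            return True  # server responded, just with an error status
        except (urllib.error.URLError, OSError):
            time.sleep(0.5)  # not listening yet; retry shortly
    return False
```

For example, call wait_for_server() right after launching the container and only start ingesting or chatting once it returns True.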

Additional notes

Environment variables control the server's behavior:

  • RAGLIGHT_LLM_MODEL and RAGLIGHT_LLM_PROVIDER select the LLM.
  • RAGLIGHT_EMBEDDINGS_MODEL selects the embeddings model.
  • RAGLIGHT_VECTOR_STORE selects the local or remote index.
  • RAGLIGHT_UI=true also launches a Streamlit-based chat UI on the port defined by RAGLIGHT_UI_PORT.
  • RAGLIGHT_PORT sets the REST API port; change it if you run into a port conflict.

Ensure your chosen LLM and embedding models are compatible with your selected provider. When running behind firewalls or in CI, adjust host/port settings accordingly, and consider using HTTPS in production.
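If you launch the server from Python (for example in a test harness or CI job), the same variables can be assembled programmatically. A minimal sketch, assuming raglight is on PATH; the default values are illustrative, taken from the examples above:

```python
# Hedged sketch: build the RAGLight environment and launch `raglight serve`
# as a child process. Variable names mirror the ones documented above.
import os
import subprocess


def raglight_env(model: str, provider: str, embeddings: str, store: str,
                 port: int = 8000, ui: bool = False,
                 ui_port: int = 8501) -> dict:
    """Return a copy of the current environment with RAGLight settings set."""
    env = dict(os.environ)
    env.update({
        "RAGLIGHT_LLM_MODEL": model,
        "RAGLIGHT_LLM_PROVIDER": provider,
        "RAGLIGHT_EMBEDDINGS_MODEL": embeddings,
        "RAGLIGHT_VECTOR_STORE": store,
        "RAGLIGHT_PORT": str(port),
        "RAGLIGHT_UI": "true" if ui else "false",
        "RAGLIGHT_UI_PORT": str(ui_port),
    })
    return env


def launch_server() -> subprocess.Popen:
    """Start the server as a child process (assumes raglight is installed)."""
    return subprocess.Popen(
        ["raglight", "serve"],
        env=raglight_env("mistral-small-latest", "Mistral",
                         "all-MiniLM-L6-v2", "local"),
    )
```

Keeping the configuration in one function makes it easy to spin up differently configured servers (e.g. one per vector store) from the same script.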
