
Vector-Knowledge-Base

A semantic search engine that transforms your documents into an intelligent, searchable knowledge base using vector embeddings and AI.

Installation
Run this command in your terminal to add the MCP server to Claude Code:
claude mcp add --transport stdio i3t4an-vector-knowledge-base \
  --env QDRANT_HOST="http://localhost:6333" \
  --env QDRANT_PORT="6333" \
  --env EMBEDDING_MODEL="sentence-transformers/all-mpnet-base-v2" \
  -- uvicorn main:app --reload --port 8000 --host 0.0.0.0

How to use

Vector Knowledge Base exposes a Python FastAPI backend that powers a semantic search and document management system. It integrates with a Qdrant vector store and supports uploading a variety of document formats, with automatic text extraction, chunking, and embedding generation. Through MCP, you can connect external AI agents (such as Claude Desktop or other Model Context Protocol clients) to search, create, and manage documents in your knowledge base, enabling agent-driven workflows such as query-driven document retrieval or automated note-taking. Agents request searches, create new documents, or update metadata by sending structured intents that the server understands, which streamlines automation and multi-tool collaboration around your documents.
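As a rough sketch of what such a structured intent might look like, the snippet below serializes hypothetical intent payloads as JSON. The intent names (`search`, `create_document`) and parameter fields here are illustrative assumptions; consult the server's actual MCP schema for the real shape.

```python
import json

def build_intent(intent: str, **params) -> str:
    """Serialize a hypothetical MCP-style intent as JSON.

    The intent names and parameter fields are illustrative assumptions,
    not the server's documented schema.
    """
    return json.dumps({"intent": intent, "params": params}, sort_keys=True)

# A search intent with a natural-language query and a cluster filter.
search = build_intent("search", query="quarterly revenue notes",
                      cluster="finance", limit=5)

# A document-creation intent.
create = build_intent("create_document", title="Meeting notes",
                      content="Discussed Q3 roadmap.")
```

An agent framework would send payloads like these over the MCP transport; the server parses the intent name and dispatches to the matching operation.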

To use the server, run it locally or in your deployment environment. Then connect MCP-enabled agents to the endpoint to perform operations such as searching with natural language prompts, filtering by clusters or dates, and managing the document registry. You can also leverage the built-in UI and REST endpoints (via FastAPI) to upload documents, manage folders, and run administrative actions like exporting data or resetting the database.
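When you upload a document, the server extracts its text, splits it into chunks, and embeds each chunk. A minimal sketch of fixed-size chunking with overlap (the chunk size and overlap values are illustrative, not the project's actual defaults):

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with overlap.

    Overlap keeps context shared between adjacent chunks so a sentence
    cut at a boundary is still searchable. Parameter values here are
    illustrative, not the project's defaults.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

# Each chunk shares its last `overlap` characters with the next one.
parts = chunk_text("abcdef", chunk_size=4, overlap=2)  # → ["abcd", "cdef", "ef"]
```

Each resulting chunk is then passed to the embedding model and stored in Qdrant alongside its document metadata.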

How to install

Prerequisites:

  • Python 3.11+ installed on your system
  • Git
  • Optional: Docker & Docker Compose (for containerized deployment)

Step-by-step installation (manual, Python-based):

  1. Clone the repository

    git clone https://github.com/i3T4AN/Vector-Knowledge-Base.git
    cd Vector-Knowledge-Base

  2. Create and activate a virtual environment

    python -m venv venv

    macOS/Linux

    source venv/bin/activate

    Windows

    venv\Scripts\activate

  3. Install dependencies

    pip install -r requirements.txt

  4. Ensure a Qdrant instance is running (default port 6333). You can run one locally via Docker:

    docker run -d -p 6333:6333 -v ./qdrant_storage:/qdrant/storage qdrant/qdrant

  5. Run the FastAPI server with uvicorn (or use the MCP-compatible command once configured)

    uvicorn main:app --reload --port 8000 --host 0.0.0.0

  6. Access the API (and the frontend if bundled) at http://localhost:8000
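Once the server is up, you can exercise the REST API from Python. The sketch below builds a search request with the standard library; the `/search` path and payload fields are assumptions, so check the interactive FastAPI docs at http://localhost:8000/docs for the real routes.

```python
import json
import urllib.request

# Hypothetical search call against a locally running server; the
# /search endpoint and payload fields are assumptions, not documented API.
payload = json.dumps({"query": "deployment checklist", "limit": 3}).encode()
req = urllib.request.Request(
    "http://localhost:8000/search",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Sending it requires the server to actually be running:
# with urllib.request.urlopen(req) as resp:
#     results = json.load(resp)
```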

Alternative Docker-based deployment (recommended for production):

  • Ensure Docker and Docker Compose are installed
  • Follow the repository's Docker Compose setup instructions in the Quick Start section of the README to start Qdrant, backend, and frontend services together.

Note: On first run, the embedding model (~400MB) will be downloaded automatically. This may take a few minutes.
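Under the hood, semantic search ranks stored chunks by vector similarity to the embedded query. The toy example below illustrates cosine scoring with made-up 3-dimensional vectors (real all-mpnet-base-v2 embeddings are 768-dimensional):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query = [0.1, 0.9, 0.2]           # toy "query" vector
chunk_close = [0.15, 0.85, 0.25]  # similar direction -> higher score
chunk_far = [0.9, 0.1, 0.0]       # different direction -> lower score

assert cosine_similarity(query, chunk_close) > cosine_similarity(query, chunk_far)
```

Qdrant performs this kind of comparison at scale with an approximate nearest-neighbor index, so you never compute similarities by hand.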

Additional notes

Tips and common issues:

  • If you run into port conflicts, adjust the --port in the startup command and ensure your firewall allows access to that port.
  • Ensure Qdrant is reachable at the configured host/port (default http://localhost:6333). If you run inside Docker, use the Docker network hostname or set QDRANT_HOST accordingly.
  • The embedding model may take time to download on first run; ensure you have sufficient bandwidth and disk space (~1GB total when including dependencies).
  • For MCP usage, make sure your agent is configured to communicate with the server using the MCP schema (intent-based actions like search, create_document, update_metadata, etc.).
  • Environment variables can be adjusted to point to alternate Qdrant instances, different embedding models, or alternate storage backends as needed.
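A common pattern for such configuration is reading environment variables with sensible fallbacks; the sketch below uses the variable names from the install command above, though the project's actual defaults may differ.

```python
import os

def get_setting(name: str, default: str) -> str:
    """Read a configuration value from the environment, falling back to a default."""
    return os.environ.get(name, default)

# Variable names mirror the install command; defaults are illustrative.
qdrant_host = get_setting("QDRANT_HOST", "http://localhost:6333")
qdrant_port = int(get_setting("QDRANT_PORT", "6333"))
model_name = get_setting("EMBEDDING_MODEL",
                         "sentence-transformers/all-mpnet-base-v2")
```

Overriding `QDRANT_HOST` this way is how you would point the backend at a remote or Docker-networked Qdrant instance without touching code.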
