
personal-notes-assistant

A RAG server for your Obsidian vault.

Installation

Run this command in your terminal to add the MCP server to Claude Code, replacing the placeholder values with your own settings. Set `LLM_PROVIDER` to either `ollama` or `openai`; `OPENAI_API_KEY` is only needed when using OpenAI, and `LLM_MODEL` names the model for the chosen provider (e.g., `mistral:7b-instruct` for Ollama):

```shell
claude mcp add --transport stdio coeusyk-personal-notes-assistant python main.py \
  --env LLM_PROVIDER="ollama" \
  --env LLM_MODEL="mistral:7b-instruct" \
  --env OLLAMA_URL="http://localhost:11434" \
  --env MILVUS_HOST="localhost" \
  --env MILVUS_PORT="19530" \
  --env OPENAI_API_KEY="your-openai-api-key" \
  --env OBSIDIAN_VAULT_PATH="/path/to/your/vault"
```

How to use

Personal Notes Assistant is a Retrieval-Augmented Generation (RAG) MCP server that indexes your Obsidian vault into a Milvus vector store and answers questions over your notes. Queries can be served by either a local LLM running on Ollama or the OpenAI API, letting you ask complex questions and receive answers grounded in your notes. The server continuously watches your vault and keeps the index synchronized in real time, so query results reflect the latest changes to your notes.
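To illustrate the retrieval step described above: the server embeds note chunks, stores the vectors in Milvus, and at query time retrieves the most similar chunks as context for the LLM. The sketch below is a stdlib-only stand-in, using a toy bag-of-words similarity instead of a real embedding model and vector store; the function names and the notes are illustrative, not taken from the project.

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; the real server uses a neural
    # embedding model and stores the vectors in Milvus.
    return Counter(w.strip(".,?:;!") for w in text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank note chunks by similarity to the query; the top-k chunks
    # would be passed to the LLM as context for answer generation.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

notes = [
    "Milvus stores the vault embeddings for similarity search.",
    "Groceries: eggs, milk, coffee.",
    "Ollama serves local LLMs such as mistral:7b-instruct.",
]
print(retrieve("which database stores embeddings?", notes, k=1))
```

The real pipeline differs mainly in scale: neural embeddings capture meaning rather than exact word overlap, and Milvus performs the nearest-neighbor search over the whole vault efficiently.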

How to install

Prerequisites:

  • Python 3.9+
  • uv (for Python package management)
  • Docker and Docker Compose (for Milvus)
  • Obsidian vault
  • Ollama (optional, for local models)

Setup steps:

  1. Clone the repository:

     ```shell
     git clone https://github.com/your/repo.git
     cd repo
     ```

  2. Start Milvus with Docker:

     ```shell
     docker-compose up -d
     ```

  3. Create and activate a Python virtual environment using uv:

     ```shell
     uv venv
     .venv\Scripts\activate       # Windows
     source .venv/bin/activate    # Linux/macOS
     ```

  4. Install dependencies in editable mode:

     ```shell
     uv pip install -e .
     ```

  5. Configure environment variables. Copy the sample env file and edit it as needed:

     ```shell
     cp .env.sample .env
     ```

     Set OBSIDIAN_VAULT_PATH, LLM_PROVIDER, MILVUS_HOST/MILVUS_PORT, and API keys as appropriate.

  6. Run the server:

     ```shell
     python main.py
     ```
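At startup, main.py is expected to read the environment variables configured in step 5. A minimal sketch of that loading logic, assuming the variable names from the installation command (the actual parsing in main.py may differ; `Config` and `load_config` are hypothetical names):

```python
import os
from dataclasses import dataclass
from typing import Optional

@dataclass
class Config:
    llm_provider: str
    llm_model: str
    ollama_url: str
    milvus_host: str
    milvus_port: int
    vault_path: str
    openai_api_key: Optional[str]

def load_config() -> Config:
    # Read settings from the environment (populated from .env or the
    # --env flags passed to `claude mcp add`); defaults mirror the
    # values shown in the installation command.
    provider = os.environ.get("LLM_PROVIDER", "ollama")
    if provider not in ("ollama", "openai"):
        raise ValueError(f"LLM_PROVIDER must be 'ollama' or 'openai', got {provider!r}")
    return Config(
        llm_provider=provider,
        llm_model=os.environ.get("LLM_MODEL", "mistral:7b-instruct"),
        ollama_url=os.environ.get("OLLAMA_URL", "http://localhost:11434"),
        milvus_host=os.environ.get("MILVUS_HOST", "localhost"),
        milvus_port=int(os.environ.get("MILVUS_PORT", "19530")),
        vault_path=os.environ["OBSIDIAN_VAULT_PATH"],  # required, no default
        openai_api_key=os.environ.get("OPENAI_API_KEY"),
    )
```

Failing fast on a missing OBSIDIAN_VAULT_PATH or an invalid LLM_PROVIDER at startup gives a clearer error than a later failure deep inside indexing or query handling.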

Additional notes

Notes and tips:

  • Ensure Milvus is reachable at MILVUS_HOST:MILVUS_PORT; adjust docker-compose if you run Milvus differently.
  • Choose LLM_PROVIDER in the environment (.env) to match your setup: ollama for local models or openai for API access.
  • If you switch to CUDA-enabled PyTorch, follow the PyTorch CUDA installation steps and reinstall Torch accordingly.
  • For local models with Ollama, ensure Ollama is installed and the model specified in LLM_MODEL is downloaded.
  • The server watches the Obsidian vault in real-time; changes will be reflected in search results after indexing completes.
  • If you encounter authentication issues with OpenAI, verify the API key and ensure it has the required permissions for the selected model.
