personal-notes-assistant
A RAG server for your Obsidian vault.
```shell
claude mcp add --transport stdio coeusyk-personal-notes-assistant python main.py \
  --env LLM_MODEL="model name (e.g., mistral:7b-instruct) for Ollama" \
  --env OLLAMA_URL="http://localhost:11434" \
  --env MILVUS_HOST="localhost" \
  --env MILVUS_PORT="19530" \
  --env LLM_PROVIDER="either 'ollama' or 'openai'" \
  --env OPENAI_API_KEY="your-openai-api-key (if using OpenAI)" \
  --env OBSIDIAN_VAULT_PATH="Path to your Obsidian vault"
```
How to use
Personal Notes Assistant is a Retrieval-Augmented Generation (RAG) MCP server that indexes your Obsidian vault into a Milvus vector store and answers questions over your notes. It supports querying via either a local LLM served by Ollama or the OpenAI API, letting you ask complex questions and receive accurate, up-to-date responses drawn from your notes. The server continuously watches your vault and keeps the knowledge base synchronized in real time, so your queries reflect the latest changes to your notes.
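The core retrieval step can be illustrated with a toy, in-memory sketch. The vectors and note names below are hypothetical; the actual server embeds notes with a real embedding model and stores the vectors in Milvus rather than a Python dict:

```python
from math import sqrt

# Toy "embeddings" standing in for vectors produced by a real embedding
# model; the actual server stores these in Milvus, not in memory.
notes = {
    "meeting.md": [0.9, 0.1, 0.0],
    "recipes.md": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, top_k=1):
    """Return the top_k note names ranked by similarity to the query."""
    ranked = sorted(notes, key=lambda n: cosine(query_vec, notes[n]), reverse=True)
    return ranked[:top_k]

print(retrieve([1.0, 0.0, 0.0]))  # ['meeting.md']
```

The retrieved notes are then passed as context to the configured LLM (Ollama or OpenAI), which generates the final answer.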
How to install
Prerequisites:
- Python 3.9+
- uv (for Python package management)
- Docker and Docker Compose (for Milvus)
- Obsidian vault
- Ollama (optional, for local models)
Setup steps:
1. Clone the repository:
   ```shell
   git clone https://github.com/your/repo.git
   cd repo
   ```
2. Start Milvus with Docker:
   ```shell
   docker-compose up -d
   ```
3. Create and activate a Python virtual environment using uv:
   ```shell
   uv venv
   .venv\Scripts\activate     # Windows
   source .venv/bin/activate  # Linux/macOS
   ```
4. Install dependencies in editable mode:
   ```shell
   uv pip install -e .
   ```
5. Configure environment variables. Copy the sample env and edit it as needed:
   ```shell
   cp .env.sample .env
   ```
   Set OBSIDIAN_VAULT_PATH, LLM_PROVIDER, MILVUS_HOST/MILVUS_PORT, and API keys as appropriate.
6. Run the server:
   ```shell
   python main.py
   ```
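A filled-in .env might look like this. The values are illustrative; the variable names match those used in the install command above:

```
LLM_PROVIDER=ollama
LLM_MODEL=mistral:7b-instruct
OLLAMA_URL=http://localhost:11434
MILVUS_HOST=localhost
MILVUS_PORT=19530
# Required only when LLM_PROVIDER=openai
OPENAI_API_KEY=sk-...
OBSIDIAN_VAULT_PATH=/path/to/your/vault
```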
Additional notes
- Ensure Milvus is reachable at MILVUS_HOST:MILVUS_PORT; adjust docker-compose if you run Milvus differently.
- Choose LLM_PROVIDER in the environment (.env) to match your setup: ollama for local models or openai for API access.
- If you switch to CUDA-enabled PyTorch, follow the PyTorch CUDA installation steps and reinstall Torch accordingly.
- For local models with Ollama, ensure Ollama is installed and the model specified in LLM_MODEL is downloaded.
- The server watches the Obsidian vault in real time; changes appear in search results once indexing completes.
- If you encounter authentication issues with OpenAI, verify the API key and ensure it has the required permissions for the selected model.
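To confirm Milvus is reachable at MILVUS_HOST:MILVUS_PORT before starting the server, a quick TCP check is enough. This stdlib-only helper is not part of the project, just one way to run that check:

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check the default Milvus endpoint used in the setup above.
# is_reachable("localhost", 19530)
```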
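When using Ollama, its /api/tags endpoint lists models that are already downloaded, which is a convenient way to verify that the model named in LLM_MODEL is available. A hypothetical helper for checking that response (the JSON shape shown is simplified to the one field the check uses):

```python
import json

def model_available(tags_response: str, model_name: str) -> bool:
    """Check whether model_name appears in an Ollama /api/tags JSON response."""
    data = json.loads(tags_response)
    return any(m.get("name") == model_name for m in data.get("models", []))

# Example response, trimmed to the fields this check inspects.
sample = '{"models": [{"name": "mistral:7b-instruct"}]}'
print(model_available(sample, "mistral:7b-instruct"))  # True
```

In practice you would fetch the response from OLLAMA_URL + "/api/tags" and, if the model is missing, download it with `ollama pull <model>`.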
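Real-time vault watching typically needs an event filter so that only Markdown notes trigger reindexing, while Obsidian's internal files are ignored. A hypothetical version of such a filter (the server's actual logic may differ):

```python
from pathlib import PurePosixPath

def should_reindex(path: str) -> bool:
    """Decide whether a changed file warrants reindexing.

    Hypothetical rule: only Markdown notes count, and Obsidian's
    internal .obsidian/ directory is skipped entirely.
    """
    p = PurePosixPath(path)
    if ".obsidian" in p.parts:
        return False
    return p.suffix == ".md"

print(should_reindex("vault/daily/note.md"))             # True
print(should_reindex("vault/.obsidian/workspace.json"))  # False
```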