obsidian-notebook
An MCP server that lets Claude connect to your Obsidian notes for vector and full-text search.
claude mcp add --transport stdio jarmentor-obsidian-notebook-mcp node /path/to/ai-note-searcher-5000/mcp-server.js \
  --env MCP_SERVER="true" \
  --env OLLAMA_URL="http://127.0.0.1:11434" \
  --env QDRANT_URL="http://127.0.0.1:6333" \
  --env NOTEBOOK_PATH="/path/to/your/notebook"
How to use
This MCP server powers an Obsidian-oriented semantic search system. It exposes a set of tools that allow an LLM to perform high-quality searches over your Obsidian notes, retrieve full note contents, and perform file and directory operations as needed by complex prompts.

The server relies on a local vector store (Qdrant) populated with embeddings generated from your Obsidian vault using Ollama, and it uses the MCP protocol to make its capabilities accessible to an LLM client. Typical usage involves starting the stack with Docker, ensuring Ollama is available for embeddings, and pointing the MCP client (e.g., Claude Desktop) at the server via the provided configuration snippet.

Available MCP tools include search_notes for semantic note search and get_note_content for retrieving complete note text, along with additional file management tools to support broader prompts. Integrating this with Claude Desktop or another LLM interface requires configuring the MCP server entry with the path to mcp-server.js and environment variables that point at your Qdrant and Ollama instances as well as your Obsidian notebook path.
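For Claude Desktop, the MCP server entry typically goes in claude_desktop_config.json under the mcpServers key. A minimal sketch, reusing the server name and environment variables from the command above (the paths are placeholders you must adjust for your setup):

```json
{
  "mcpServers": {
    "jarmentor-obsidian-notebook-mcp": {
      "command": "node",
      "args": ["/path/to/ai-note-searcher-5000/mcp-server.js"],
      "env": {
        "MCP_SERVER": "true",
        "OLLAMA_URL": "http://127.0.0.1:11434",
        "QDRANT_URL": "http://127.0.0.1:6333",
        "NOTEBOOK_PATH": "/path/to/your/notebook"
      }
    }
  }
}
```

Restart Claude Desktop after editing the file so it picks up the new server entry.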
How to install
Prerequisites:
- Docker and Docker Compose installed on your machine
- Ollama installed and running locally with the nomic-embed-text:latest model
- Access to your Obsidian vault/notebook folder
Installation steps:
- Clone the repository and navigate to the project directory:
git clone <repository-url>
cd ai-note-searcher-5000
- Point the Docker Compose file at your notebook path by updating the docker-compose.yml volumes to mount your Obsidian notebook, for example:
volumes:
- /path/to/your/obsidian/notebook:/app/notebook:ro
- Pull the embedding model with Ollama:
ollama pull nomic-embed-text:latest
- Start the services:
docker-compose up
- Verify services:
- Qdrant should be accessible at http://localhost:6333
- Ollama at the configured URL (default http://127.0.0.1:11434)
- MCP server will be available to LLM clients once the stack is up
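To confirm both backends are reachable, a small Node script (Node 18+ for the global fetch) can probe them. This is a sketch using standard listing endpoints — /collections on Qdrant and /api/tags on Ollama — with the default ports from this setup:

```javascript
// Health check for the local stack; prints "ok", an HTTP status, or "unreachable".
const services = {
  Qdrant: "http://127.0.0.1:6333/collections",
  Ollama: "http://127.0.0.1:11434/api/tags",
};

async function checkServices() {
  const results = {};
  for (const [name, url] of Object.entries(services)) {
    try {
      // Short timeout so a down service fails fast instead of hanging.
      const res = await fetch(url, { signal: AbortSignal.timeout(2000) });
      results[name] = res.ok ? "ok" : `HTTP ${res.status}`;
    } catch {
      results[name] = "unreachable";
    }
  }
  return results;
}

checkServices().then((results) => {
  for (const [name, status] of Object.entries(results)) {
    console.log(`${name}: ${status}`);
  }
});
```

If either service prints "unreachable", check that the Docker stack is running and that the URLs use 127.0.0.1 rather than localhost.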
For local development without Docker, install dependencies and run the dev server per the repository’s package.json scripts, ensuring Qdrant and Ollama are available locally.
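Conceptually, a semantic search is two REST calls: embed the query with Ollama, then search Qdrant with the resulting vector. The sketch below follows the public Ollama (POST /api/embeddings) and Qdrant (POST /collections/&lt;name&gt;/points/search) APIs; the collection name "notes" and the helper names are illustrative assumptions, not the project's actual schema:

```javascript
// Pure helpers that build the two request bodies, so the shapes are easy to inspect.
function buildEmbedRequest(query) {
  // Ollama embeddings endpoint expects a model name and a prompt.
  return { model: "nomic-embed-text:latest", prompt: query };
}

function buildQdrantSearch(vector, limit = 5) {
  // Qdrant search body: the query vector, result count, and payload retrieval.
  return { vector, limit, with_payload: true };
}

// Wiring the two calls together (assumed default URLs and collection name):
async function searchNotes(query) {
  const embedRes = await fetch("http://127.0.0.1:11434/api/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildEmbedRequest(query)),
  });
  const { embedding } = await embedRes.json();

  const searchRes = await fetch("http://127.0.0.1:6333/collections/notes/points/search", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildQdrantSearch(embedding)),
  });
  return (await searchRes.json()).result;
}
```

The actual server adds file watching and re-indexing on top of this round trip, but the embed-then-search shape is the core of search_notes.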
Additional notes
Tips and common considerations:
- If you see "fetch failed" errors in MCP responses, use 127.0.0.1 instead of localhost for service URLs in your client configuration.
- If you don’t see search results, check Docker logs and ensure that the file watcher is processing your Obsidian vault and that embeddings have been generated.
- MCP JSON parsing errors can occur if the MCP_SERVER flag isn’t set; setting MCP_SERVER=true keeps console logging out of stdout so MCP messages parse cleanly.
- Ensure the Qdrant data directory is persisted (e.g., docker-compose volumes) to avoid data loss when restarting containers.
- For production, consider configuring additional environment variables for security and performance tuning, such as enabling authentication for Qdrant or restricting accessible endpoints.
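To persist Qdrant data across restarts, mount its storage directory in docker-compose.yml. A sketch assuming the Qdrant service is named qdrant (service and volume names may differ in this repository’s compose file; /qdrant/storage is Qdrant’s default storage path):

```yaml
services:
  qdrant:
    image: qdrant/qdrant
    volumes:
      # Collections and points live under /qdrant/storage by default.
      - qdrant_data:/qdrant/storage

volumes:
  qdrant_data:
```

A named volume like this survives docker-compose down and container recreation, so embeddings don’t need to be regenerated after every restart.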