Cursor-history
API service to search vectorized Cursor IDE chat history using LanceDB and Ollama
claude mcp add --transport stdio markelaugust74-cursor-history-mcp docker run -i -p 8001:8001 -v /path/to/your/cursor_chat_history.lancedb:/data/cursor_chat_history.lancedb -e OLLAMA_HOST=http://host.docker.internal:11434 cursor-chat-search-api
How to use
Cursor-history MCP provides a Dockerized FastAPI server that exposes a simple API to search your vectorized Cursor chat history. After running the extraction script to generate a LanceDB database of prompts and embeddings, you start the API in a container. The server exposes endpoints such as POST /search_chat_history for vector similarity search and GET /health to verify connectivity to Ollama and LanceDB. You can use these endpoints to perform Retrieval Augmented Generation (RAG) tasks by passing a query and receiving matching historical prompts with associated metadata. The workflow is designed to keep computation local: embeddings are generated using a local Ollama model (nomic-embed-text) and stored in LanceDB, with the API serving search results from that database.
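The search endpoint can be driven from any HTTP client. The sketch below shows how a RAG caller might build the request body and fold the returned prompts into a context block; the field names ("query", "limit", "prompt") are assumptions for illustration, not the API's documented schema.

```python
import json

def build_search_payload(query: str, limit: int = 5) -> bytes:
    """Serialize the JSON body for POST /search_chat_history.
    The "query"/"limit" field names are assumed, not confirmed."""
    return json.dumps({"query": query, "limit": limit}).encode("utf-8")

def format_rag_context(results: list) -> str:
    """Join matched historical prompts into one context block for a RAG prompt.
    Assumes each hit carries a "prompt" field."""
    return "\n---\n".join(hit["prompt"] for hit in results)

# Against a running container on localhost:8001 (commented out so the
# sketch stays self-contained):
# req = urllib.request.Request(
#     "http://localhost:8001/search_chat_history",
#     data=build_search_payload("docker networking issue"),
#     headers={"Content-Type": "application/json"},
# )
# results = json.load(urllib.request.urlopen(req))
```

The formatted context can then be prepended to a model prompt so the model answers with knowledge of your past Cursor sessions.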
How to install
Prerequisites:
- Docker Desktop (Windows/Mac) or Docker Engine (Linux) installed and running.
- Python 3.7+ (if you plan to run the extraction script locally).
- Ollama installed and running, with the nomic-embed-text:latest model available.

Install and set up steps:
1) Clone the repository and navigate to the project root.
2) If you plan to run the extraction locally:
   - Install the extractor's Python dependencies: pip install -r requirements.txt
   - Ensure WORKSPACE_STORAGE_PATH in cursor_history_extractor.py points to your Cursor data.
   - Start Ollama and make the embedding model available: ollama pull nomic-embed-text:latest
3) Run the extraction script on the host machine to create the LanceDB database:
   python cursor_history_extractor.py
   This creates ./cursor_chat_history.lancedb on the host.
4) Build and run the API Docker container (as shown in the MCP config):
   docker build -t cursor-chat-search-api .
   docker run -i -p 8001:8001 -v /path/to/your/cursor_chat_history.lancedb:/data/cursor_chat_history.lancedb -e OLLAMA_HOST="http://host.docker.internal:11434" cursor-chat-search-api

Notes:
- The volume mount must point to the absolute path of the LanceDB directory on the host so the container can access the database.
- On Linux, adjust OLLAMA_HOST and networking as needed so the container can reach the Ollama instance.
- Ensure the LanceDB table used is named chat_history and that the database path inside the container matches /data/cursor_chat_history.lancedb.
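The build-and-run step above can also be expressed as a Compose file. This is a sketch that mirrors the flags of the docker run command; the service name is arbitrary, and /path/to/your/cursor_chat_history.lancedb is the same placeholder used above and must be replaced with your actual host path:

```yaml
# docker-compose.yml (hypothetical; mirrors the README's docker run flags)
services:
  cursor-chat-search-api:
    build: .
    ports:
      - "8001:8001"
    volumes:
      - /path/to/your/cursor_chat_history.lancedb:/data/cursor_chat_history.lancedb
    environment:
      - OLLAMA_HOST=http://host.docker.internal:11434
    stdin_open: true  # equivalent of docker run -i, needed for stdio MCP transport
```

With this file in the project root, docker compose up --build replaces the separate build and run commands.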
Additional notes
Tips and common issues:
- Ensure Ollama is reachable from inside the Docker container. If you run Ollama on the host, use host networking or the host.docker.internal alias (Windows/Mac).
- The extraction script writes to ./cursor_chat_history.lancedb by default; make sure you have write permission in the target directory.
- The API exposes /health to check connections to Ollama and LanceDB; use it to confirm that the database is open and embeddings can be retrieved.
- When mounting volumes on Windows, use forward slashes and absolute paths, e.g. C:/path/to/cursor_chat_history.lancedb:/data/cursor_chat_history.lancedb.
- If you change the LanceDB directory, update the volume mapping in the docker run command accordingly.
- The embedding model nomic-embed-text:latest produces 768-dimensional embeddings; ensure the model version in Ollama matches what the extractor expects.
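A startup check against /health might look like the sketch below. It assumes the endpoint returns a JSON object with per-dependency status fields; the field names used here ("ollama", "lancedb") are assumptions, not the API's documented response shape.

```python
import json
import urllib.request

def is_healthy(body: bytes) -> bool:
    """Interpret a /health response body.
    Assumes a JSON object with truthy "ollama" and "lancedb" fields."""
    status = json.loads(body)
    return bool(status.get("ollama")) and bool(status.get("lancedb"))

# Against a running container (commented out so the sketch stays self-contained):
# with urllib.request.urlopen("http://localhost:8001/health") as resp:
#     print("OK" if is_healthy(resp.read()) else "degraded")
```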
Related MCP Servers
XHS-Downloader
XiaoHongShu (RedNote) link extraction and content collection tool: extracts links to an account's posted, favorited, liked, and album works; extracts post and user links from search results; collects XiaoHongShu post metadata; extracts XiaoHongShu post download URLs; downloads XiaoHongShu post files.
ollama-bridge
Extend the Ollama API with dynamic AI tool integration from multiple MCP (Model Context Protocol) servers. Fully compatible, transparent, and developer-friendly, ideal for building powerful local LLM applications, AI agents, and custom chatbots
knowledge-graph-system
Kappa Graph — κ(G). A semantic knowledge graph where knowledge has weight. Extracts concepts, measures grounding strength, preserves disagreement, traces everything to source.
srag
Semantic code search and RAG system written in Rust with tree-sitter chunking, MCP server for IDE integration, prompt injection detection, and secret redaction
mcp-zenodo
Tool-based LLM integration with Zenodo via the Model Context Protocol (MCP)