Cursor-history
API service to search vectorized Cursor IDE chat history using LanceDB and Ollama
Register the server with Claude Code (the `--` separates the server command from claude's own flags):
claude mcp add --transport stdio nossim-cursor-history-mcp -- docker run -p 8000:8000 cursor-history-mcp
How to use
Cursor History MCP exposes a FastAPI-based API for searching and retrieving vectorized chat history. Conversations are stored in LanceDB and embedded with Ollama for local language-model processing. After starting the Docker container, the service provides vector-powered retrieval over your Cursor IDE conversations: the /search endpoint queries the index with a natural-language prompt and returns a structured set of results with IDs, messages, and timestamps, while the /history endpoint returns the complete chat history dataset.
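A minimal client sketch for the two endpoints above, using only the Python standard library. The endpoint paths come from this README; the `q` query-parameter name and the shape of the JSON response are assumptions, so check http://localhost:8000/docs for the actual request schema.

```python
import json
import urllib.parse
import urllib.request

BASE_URL = "http://localhost:8000"  # default port from the docker run command


def build_search_url(query: str, base: str = BASE_URL) -> str:
    """Build the /search request URL; the `q` parameter name is an assumption."""
    return f"{base}/search?{urllib.parse.urlencode({'q': query})}"


def search(query: str) -> dict:
    """Query the vector index with a natural-language prompt."""
    with urllib.request.urlopen(build_search_url(query)) as resp:
        return json.load(resp)


def full_history() -> dict:
    """Fetch the complete chat history dataset."""
    with urllib.request.urlopen(f"{BASE_URL}/history") as resp:
        return json.load(resp)
```

With the container running, `search("vector search setup")` returns the decoded JSON results; iterate over it according to the schema shown in /docs.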
Once running, you can leverage Ollama-backed local LLMs for additional processing or augmentation of results, and LanceDB provides fast vector search capabilities for performant queries over large histories. The service is designed to be self-hosted, suitable for local development or on your own server, and is docker-friendly for straightforward deployment.
How to install
Prerequisites:
- Docker installed and running
- Git (optional, for cloning)
- Python 3.8+ (if using local development outside Docker)
Installation steps (Docker):
- Pull or build the Cursor History MCP Docker image. If you have a prebuilt image, skip straight to running the container.
- To build locally (if you have a Dockerfile or build script provided by the project): docker build -t cursor-history-mcp .
- Run the container exposing port 8000 so the API is accessible: docker run -p 8000:8000 cursor-history-mcp
- Open http://localhost:8000/docs to verify the API documentation and available endpoints.
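Since the container can take a moment to come up, a small readiness check saves guesswork before opening /docs. This is an illustrative stdlib-only sketch, assuming the default base URL from the steps above:

```python
import time
import urllib.error
import urllib.request


def docs_url(base: str = "http://localhost:8000") -> str:
    """URL of the interactive API docs served by FastAPI."""
    return base.rstrip("/") + "/docs"


def wait_for_api(base: str = "http://localhost:8000",
                 attempts: int = 30, delay: float = 1.0) -> bool:
    """Poll /docs until the container answers with HTTP 200 or attempts run out."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(docs_url(base), timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            time.sleep(delay)
    return False
```

`wait_for_api()` returning True means the API is serving; False after 30 attempts usually points at a build failure or a port conflict on 8000.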
Alternative (local development without Docker):
- Create a Python virtual environment and install dependencies (FastAPI, Uvicorn, LanceDB, Ollama client as needed).
- Run the FastAPI app with Uvicorn, for example: uvicorn main:app --reload --port 8000
- Ensure LanceDB and Ollama are configured and accessible according to the project’s documentation.
Notes:
- The exact image name and tags may vary; use the repository’s release artifacts or Docker image name as provided by the maintainers.
- If you’re behind a firewall or require authentication, configure environment variables or reverse proxy settings accordingly.
Additional notes
Tips and common considerations:
- Verify Docker has sufficient resources (CPU/Memory) for LanceDB embeddings and the local LLM model via Ollama.
- If you update the API or models, rebuild the Docker image to ensure changes are included.
- Firewall rules must allow access to port 8000 when running the container.
- Ensure that Ollama and any required models are installed and accessible on the host or within the container, depending on your deployment approach.
- Check the /docs endpoint for the latest available API routes and request schemas.
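To verify the Ollama prerequisite from the tips above, you can query Ollama's model-listing endpoint (GET /api/tags, part of Ollama's documented REST API). The default host and port below are Ollama's standard listen address; adjust them if your deployment maps Ollama differently:

```python
import json
import urllib.error
import urllib.request

OLLAMA_BASE = "http://localhost:11434"  # Ollama's default listen address


def tags_url(base: str = OLLAMA_BASE) -> str:
    """URL of Ollama's model-listing endpoint."""
    return base.rstrip("/") + "/api/tags"


def installed_models(base: str = OLLAMA_BASE) -> list[str]:
    """Return names of locally installed Ollama models, or [] if unreachable."""
    try:
        with urllib.request.urlopen(tags_url(base), timeout=2) as resp:
            payload = json.load(resp)
    except (urllib.error.URLError, OSError):
        return []
    return [m.get("name", "") for m in payload.get("models", [])]
```

An empty list either means Ollama is not running at that address or no models are pulled yet; run `ollama pull <model>` for whichever embedding/LLM model your configuration expects.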
Related MCP Servers
ragflow
RAGFlow is a leading open-source Retrieval-Augmented Generation (RAG) engine that fuses cutting-edge RAG with Agent capabilities to create a superior context layer for LLMs
cursor-talk-to-figma
TalkToFigma: MCP integration between AI Agent (Cursor, Claude Code) and Figma, allowing Agentic AI to communicate with Figma for reading designs and modifying them programmatically.
solace-agent-mesh
An event-driven framework designed to build and orchestrate multi-agent AI systems. It enables seamless integration of AI agents with real-world data sources and systems, facilitating complex, multi-step workflows.
cursor10x
The Cursor10x MCP is a persistent multi-dimensional memory system for Cursor that enhances AI assistants with conversation context, project history, and code relationships across sessions.
prism-rs
Enterprise-grade Rust implementation of Anthropic's MCP protocol
fast-telegram
Telegram MCP Server and HTTP-MTProto bridge | Multi-user auth, intelligent search, file sending, web setup | Docker & PyPI ready