heimdall
Your AI Coding Assistant's Long-Term Memory
claude mcp add --transport stdio lcbcfoo-heimdall-mcp-server python -m heimdall
How to use
Heimdall MCP Server acts as a persistent memory layer for your AI coding assistant. It indexes knowledge from your codebase, documentation, and Git history, and serves it back so your LLM can recall project-specific solutions across conversations. The server exposes MCP-compatible tools for loading documentation, querying memories, and managing your project memory.

To start using it, install the Heimdall MCP package, initialize your project memory, and run the server via the provided CLI entry point. With the server running, your coding sessions can use its memory tools to retrieve relevant context, recall past decisions, and surface architectural patterns tailored to your repository. The integration is designed to be low-overhead and namespace-isolated per project, so memories from one project won't leak into another.
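Using only the commands covered later on this page, a typical end-to-end flow looks roughly like this (the project path is a placeholder; adapt it to your repository):

```shell
pip install heimdall-mcp       # install the CLI and server
cd /path/to/your/project       # work inside the target repository
heimdall project init          # create an isolated memory namespace
heimdall load README.md        # index a document into project memory
heimdall monitor start         # keep memory in sync as files change
```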
How to install
Prerequisites
- Python 3.11 or newer.
- Optional: Docker, if you plan to run Qdrant in a container for vector storage.
Install the Heimdall MCP package
pip install heimdall-mcp
Verify installation
heimdall --version
(Optional) To run the vector store via Docker, ensure Docker is running and follow the Docker-based setup described in your project docs (e.g., spin up Qdrant and point Heimdall at it).
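As a sketch of that setup, a local Qdrant instance can be started with Qdrant's official image on its default port (adjust the container name, port, and storage path to your environment):

```shell
# Start a local Qdrant container for Heimdall's vector storage
docker run -d --name qdrant \
  -p 6333:6333 \
  -v "$(pwd)/qdrant_storage:/qdrant/storage" \
  qdrant/qdrant
```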
Initialize a project memory (in your project directory)
cd /path/to/your/project
heimdall project init
Load data and enable monitoring
# Copy or symlink your docs into the monitored directory
ln -r -s my-project-docs ./.heimdall/docs/project-docs
# Start automatic monitoring
heimdall monitor start
You can also load documents manually and import Git history:
heimdall load docs/ --recursive
heimdall load README.md
heimdall git-load .
To uninstall or clean up a project's memory:
cd /path/to/project
memory_system project clean
Additional notes
- Heimdall uses a Qdrant vector database under the hood for semantic search. If you prefer not to manage Qdrant locally, consider using the Docker-based setup where Qdrant runs in a container.
- The memory is stored in a hidden .heimdall/ directory within your project. Do not commit this directory to version control; add it to your .gitignore.
- The MCP tooling exposes a CLI called heimdall that coordinates project memory, docs loading, and Git history integration. You can load documentation manually or enable automatic monitoring for real-time updates.
- If you already have Git hooks, Heimdall can install its own hooks to keep memory updated with commits. Existing hooks are preserved and chained safely.
- For large codebases, the memory index and Git history loading can be resource-intensive; you may want to adjust monitoring intervals and selective loading as needed.
- If you encounter connection issues to Qdrant, verify that the Qdrant service is running and that the Heimdall configuration points to the correct host/port.
- When running in environments without a GUI, ensure environment variables for paths and models are properly set or rely on the defaults provided by heimdall-mcp.
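Per the note above about keeping the memory store out of version control, the .heimdall/ directory can be excluded like this:

```shell
# Keep Heimdall's local memory store out of version control
echo ".heimdall/" >> .gitignore
```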