mnemotree
Memory module for LLMs and Agents with MCP
claude mcp add --transport stdio kurcontko-mnemotree \
  --env MNEMOTREE_MCP_PERSIST_DIR="/Users/yourname/.mnemotree/chromadb" \
  -- uvx --from git+https://github.com/kurcontko/mnemotree.git \
  --with "mnemotree[mcp_server]" mnemotree-mcp
How to use
Mnemotree is an MCP server that provides a biologically inspired memory system for LLMs and agents. It lets MCP clients store, retrieve, and analyze memories with semantic search, importance scoring, and relationship tracking, and it integrates with common MCP tooling. The server is exposed over an MCP transport (stdio or, for example, HTTP) and can be connected to from clients such as Claude, Codex, or LangChain-based workflows. You can run it with uvx, pointing at the repository to enable the MCP server endpoint, which hosts memory operations such as remember, recall, and reflect through the MemoryCore and memory store abstractions. When running via MCP, you can configure the persistence directory where memories are stored locally and choose between storage backends such as ChromaDB, SQLite + sqlite-vec, or Neo4j.
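Conceptually, each of these operations is exposed as an MCP tool that clients invoke with a standard MCP tools/call request. The sketch below assumes a tool named remember that accepts a content argument; the actual tool names and argument schemas are defined by the server, so check its tool listing:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "remember",
    "arguments": { "content": "The user prefers concise answers." }
  }
}
```

Clients like Claude Desktop send these requests for you; you only see the tool names and results.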
To use the server, configure an MCP entry for mnemotree in your client. In Claude Desktop or Codex, for example, you specify the uvx-based command, the GitHub source, and the mnemotree-mcp entry point, along with a persistence directory. Once configured, you can start the server and call it to remember content, recall related memories, and reflect on a set of memories to extract insights. If you plan to expose the server over HTTP for multi-client access, run the server with the appropriate transport flags and port, then connect clients to the resulting endpoint (e.g., http://localhost:8000/mcp).
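As a sketch, a stdio entry in Claude Desktop's claude_desktop_config.json might look like the following (the mcpServers key is Claude Desktop's config format; the persistence path is a placeholder to adapt):

```json
{
  "mcpServers": {
    "mnemotree": {
      "command": "uvx",
      "args": [
        "--from", "git+https://github.com/kurcontko/mnemotree.git",
        "--with", "mnemotree[mcp_server]",
        "mnemotree-mcp"
      ],
      "env": {
        "MNEMOTREE_MCP_PERSIST_DIR": "/Users/yourname/.mnemotree/chromadb"
      }
    }
  }
}
```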
How to install
Prerequisites:
- Python 3.10+ installed on your system
- git installed
- Access to install Python packages and run uvx (the MCP runner)
Step-by-step:
- Clone the repository and navigate into it:
  git clone https://github.com/kurcontko/mnemotree.git && cd mnemotree
- Create a Python virtual environment, then install the required extras. The project provides separate extras for memory backends and MCP components:
  uv venv .venv
  uv pip install -e ".[lite,chroma]"
- If you plan to enable NER features, download the necessary NLP model (example for spaCy):
  uv run python -m spacy download en_core_web_sm
- To run the MCP server via uvx (recommended for the MCP quickstart):
  uvx --from "git+https://github.com/kurcontko/mnemotree.git" --with "mnemotree[mcp_server]" mnemotree-mcp
- Optional: prepare a persistence directory and set the environment variable in your MCP config, for example:
  MNEMOTREE_MCP_PERSIST_DIR=/Users/you/.mnemotree/chromadb
- If you plan to use HTTP transport for multi-client access, start the server with transport options per the README guidance, for example:
  uvx --from "git+https://github.com/kurcontko/mnemotree.git" --with "mnemotree[mcp_server]" mnemotree-mcp run --transport http --port 8000
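The steps above leave creating the persistence directory to you. A minimal sketch (the path is just an example; any absolute path works):

```shell
# Create the persistence directory and export an absolute path to it
PERSIST_DIR="$HOME/.mnemotree/chromadb"
mkdir -p "$PERSIST_DIR"
export MNEMOTREE_MCP_PERSIST_DIR="$PERSIST_DIR"
echo "$MNEMOTREE_MCP_PERSIST_DIR"
```

Using an absolute path here means every client that spawns the server resolves to the same store, regardless of its working directory.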
Notes:
- The MNEMOTREE_MCP_PERSIST_DIR variable controls where memories are stored; use an absolute path for consistency across clients.
- You can switch storage backends (ChromaDB, Neo4j, etc.) by adjusting the import/install extras and backend configuration as described in the project docs.
Additional notes
Tips and common issues:
- Avoid running multiple MCP processes against the same Chroma directory to prevent conflicts. If you need multiple MCP instances, use separate persist directories and ports.
- When using HTTP transport, ensure the port you choose is open and not blocked by a firewall.
- If you run into missing model or backend issues, install the appropriate extras (e.g., [chroma], [neo4j], [lite], [ner_hf], etc.) via uv pip install -e ".[<extra>]".
- Always set MNEMOTREE_MCP_PERSIST_DIR to an absolute path for predictable storage across clients.
- For development, you can point to a local clone of the repo with the local path in your MCP configuration or use the git+ URL as shown in the Quickstart.
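Following the tip about keeping instances isolated, a sketch of a client config that runs two independent stores (the server names and paths are illustrative, not prescribed by the project):

```json
{
  "mcpServers": {
    "mnemotree-work": {
      "command": "uvx",
      "args": ["--from", "git+https://github.com/kurcontko/mnemotree.git",
               "--with", "mnemotree[mcp_server]", "mnemotree-mcp"],
      "env": { "MNEMOTREE_MCP_PERSIST_DIR": "/Users/yourname/.mnemotree/work" }
    },
    "mnemotree-personal": {
      "command": "uvx",
      "args": ["--from", "git+https://github.com/kurcontko/mnemotree.git",
               "--with", "mnemotree[mcp_server]", "mnemotree-mcp"],
      "env": { "MNEMOTREE_MCP_PERSIST_DIR": "/Users/yourname/.mnemotree/personal" }
    }
  }
}
```

Because each entry has its own persist directory, the two memory stores never contend for the same Chroma files.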