a-mem
A-MEM Agentic Memory System - MCP Server for IDE Integration (Cursor, VSCode) | Dual-Storage: ChromaDB + NetworkX DiGraph with explicit typed edges | Based on Zettelkasten
claude mcp add --transport stdio tobs-code-a-mem-mcp-server uvx a-mem
How to use
A-MEM is an agentic memory system designed to empower LLM agents with a structured, graph-backed memory store. It automatically extracts keywords, tags, and contextual summaries from notes, links related memories, and supports dynamic memory evolution and semantic retrieval through a configurable graph backend. The server exposes MCP tools that enable memory creation, linking, retrieval, and maintenance, with additional capabilities like type classification, priority scoring, and an event log that records critical operations. You can run the MCP server locally and connect your IDE or agent orchestration layer to it via the MCP protocol, enabling direct memory operations from your development environment or agents. The system supports multiple backends for the graph layer (NetworkX by default, with options for RustworkX or FalkorDB), multiple LLM backends through providers such as Ollama or OpenRouter, and local or cloud model interactions through a unified interface. It also includes autonomous background processes (memory enzymes) that maintain graph health, prune weak links, and suggest new connections, all aimed at improving search quality and recall efficiency.
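A rough way to picture the dual-storage design: each note is embedded into a vector store for semantic recall and mirrored as a node in a directed graph that carries explicit, typed edges. The sketch below is illustrative only; it assumes the chromadb and networkx Python packages, and the collection name, note IDs, attribute names, and relation labels are invented for the example rather than A-MEM's actual schema:

```python
# Illustrative sketch of the dual-storage idea (not A-MEM's actual code).
# A note is embedded into a ChromaDB collection for semantic search and
# mirrored as a node in a NetworkX DiGraph whose edges carry explicit types.
import chromadb
import networkx as nx

chroma = chromadb.Client()                               # in-memory client for the demo
memories = chroma.get_or_create_collection("memories")   # collection name assumed
graph = nx.DiGraph()

def add_note(note_id, text, tags):
    # Vector side: store the text so it can be retrieved by semantic similarity.
    memories.add(ids=[note_id], documents=[text], metadatas=[{"tags": ",".join(tags)}])
    # Graph side: store the note as a node so typed links can be attached later.
    graph.add_node(note_id, tags=tags)

def link_notes(src, dst, relation, weight=1.0):
    # Explicit typed edge, e.g. relation="elaborates" (label invented for the example).
    graph.add_edge(src, dst, relation=relation, weight=weight)

add_note("n1", "ChromaDB stores embeddings for semantic retrieval.", ["storage"])
add_note("n2", "NetworkX DiGraph holds typed links between notes.", ["graph"])
link_notes("n2", "n1", relation="elaborates")
```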
To use the tools, start the MCP server and interact with it over the MCP protocol from your agent or IDE extension. Typical workflows include creating notes, linking related memories, evolving existing memories as new information arrives, and performing complex queries such as finding memories related to a concept through specific relation types. You can also trigger the researcher agent for low-confidence inferences, run a local Jina Reader for document retrieval, and enable unstructured PDF extraction. Environment variables configured in a .env file let you switch providers (Ollama or OpenRouter), set model endpoints, choose the graph backend, and tune maintenance behavior. When used through an IDE integration (Cursor, VSCode), you get explicit graph-based memory links and typed relationships that support advanced queries like “find memories related to X through relation type Y.”
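Any MCP client can also drive the server programmatically over stdio. The following is a minimal sketch using the official mcp Python SDK; the tool name create_note and its argument shape are assumptions for illustration, so discover the real tool names with list_tools first:

```python
# Minimal sketch of connecting to the a-mem MCP server over stdio with the
# `mcp` Python SDK. The tool name and arguments below are assumptions;
# list_tools() reports the names and schemas the server actually exposes.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    params = StdioServerParameters(command="uvx", args=["a-mem"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])   # discover the real tool names
            # Hypothetical call; replace with a tool name reported above.
            result = await session.call_tool(
                "create_note", {"content": "Prefer the NetworkX backend for small graphs."}
            )
            print(result.content)

asyncio.run(main())
```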
How to install
Prerequisites:
- Python 3.8+ (recommended) and pip
- Git (to clone the repository) or a prebuilt package install option
- Optional: Docker for containerized deployment and local Jina Reader setup
Installation steps (two common approaches):
Option A: Install from source
- Clone the repository:
  git clone https://github.com/slug/tobs-code-a-mem-mcp-server.git
  cd tobs-code-a-mem-mcp-server
- Create and activate a virtual environment (recommended):
  python3 -m venv venv
  source venv/bin/activate     # on Unix
  .\venv\Scripts\activate      # on Windows
- Install dependencies:
  pip install -r requirements.txt
- Prepare environment variables (see Additional notes for details) and run:
  uvx a-mem                # if using the uvx-based package runner
  python -m a_mem.server   # if there is a direct Python entrypoint
Option B: Install as a Python package (recommended for deployment)
- Install via pip (from PyPI or a local build):
  pip install a-mem
- Run the server module or executable provided by the package:
  uvx a-mem
  or, if the package exposes a direct entrypoint:
  python -m a_mem.server
- Ensure your .env file is configured with appropriate providers, graph backend, and model endpoints, then start the server as above.
Additional notes
Tips and common considerations:
- Environment variables: Configure provider endpoints (Ollama or OpenRouter), graph backend (NetworkX, RustworkX, FalkorDB), and any required API keys in a .env file. The system supports experimental metadata fields and safe graph wrapping for robust operations.
- Graph backends: Default to NetworkX, with options to switch to RustworkX or FalkorDB for performance or scalability needs. Changing backends may affect edge types and traversal semantics, so verify migrations if upgrading.
- Model backends: For local testing, Ollama (local) is convenient. For cloud usage, set up OpenRouter or other supported providers with proper authentication.
- Tools and maintenance: The MCP server exposes tools for note creation, linking, memory evolution, and semantic retrieval. It also includes memory enzymes (link pruning, dead-end linking, summary digestion, etc.) that run in the background on an hourly cadence. Monitor data/events.jsonl for a comprehensive audit trail of operations.
- Researcher agent: If confidence in a retrieval is below a threshold, the system can automatically trigger deep web research using MCP tools or HTTP fallbacks (Google, DuckDuckGo, Jina Reader). This can be tuned via environment/config flags.
- Data formats: Notes are stored with explicit relation types, reasoning, and weights to enable complex graph queries such as finding related notes via a specific relation type (see the sketch after this list). Ensure your data sources follow the expected structure to maximize compatibility with the graph.
- PDF and document ingestion: Unstructured PDF extraction is supported, enabling automatic extraction and indexing of content for retrieval.
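To make the relation-typed query from the data-formats note concrete, here is a small NetworkX sketch; the edge attribute names (relation, reasoning, weight) and the relation labels are assumptions for the example, not A-MEM's internal traversal code:

```python
# Illustrative relation-typed traversal over a NetworkX DiGraph.
# Edge attributes (relation, reasoning, weight) mirror the note-link structure
# described above; the attribute names and labels are assumptions.
import networkx as nx

g = nx.DiGraph()
g.add_edge("note:caching", "note:redis", relation="implements",
           reasoning="Redis chosen as cache", weight=0.9)
g.add_edge("note:caching", "note:ttl-bug", relation="contradicts",
           reasoning="Observed stale reads", weight=0.6)
g.add_edge("note:redis", "note:deploy", relation="depends_on",
           reasoning="Needs a Redis service", weight=0.8)

def related_via(graph, start, relation, min_weight=0.5):
    """Return neighbors reachable from `start` through edges of one relation type."""
    return [
        (dst, data["reasoning"])
        for _, dst, data in graph.out_edges(start, data=True)
        if data.get("relation") == relation and data.get("weight", 0.0) >= min_weight
    ]

print(related_via(g, "note:caching", "implements"))   # [('note:redis', 'Redis chosen as cache')]
```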
Related MCP Servers
bitbucket
Bitbucket MCP - A Model Context Protocol (MCP) server for integrating with Bitbucket Cloud and Server APIs
mie
Persistent memory graph for AI agents. Facts, decisions, entities, and relationships that survive across sessions, tools, and providers. MCP server — works with Claude, Cursor, ChatGPT, and any MCP client.
mem0-selfhosted
Self-hosted mem0 MCP server for Claude Code. Run a complete memory server against self-hosted Qdrant + Neo4j + Ollama while using Claude as the main LLM.
adyen
TypeScript library for integrating Adyen APIs via an MCP server
post-cortex
Post-Cortex provides durable memory infrastructure with automatic knowledge graph construction, intelligent entity extraction, and semantic search powered by local transformer models.
statelessagent
Your AI forgets everything between sessions. SAME fixes that. Local-first, no API keys, single binary.