mnemo
MCP-native embedded memory database for AI agents built in Rust. REMEMBER/RECALL/FORGET/SHARE primitives with hybrid vector search, AES-256-GCM encryption, DuckDB/PostgreSQL backends & SDKs for Python, TypeScript and Go.
```shell
claude mcp add --transport stdio --env OPENAI_API_KEY="sk-..." \
  sattyamjjain-mnemo -- ./target/release/mnemo --db-path ./agent.mnemo.db
```
How to use
Mnemo is an MCP-native memory database that gives AI agents persistent memory. Once the Rust binary is built, run Mnemo and connect to it from your MCP client configuration. Mnemo exposes a suite of tools under the mnemo namespace, including remember, recall, forget, and share, plus advanced capabilities such as checkpointing, branching, merging, replay, delegation, and integrity verification. With these, agents can store semantic memories, retrieve them via hybrid semantic and keyword search, manage memory lifecycles, share memories with other agents, and audit how memories evolve over time. The server also speaks multiple protocols (MCP, REST, gRPC, and pgwire), so you can reach Mnemo from a variety of clients and environments.
To use Mnemo from an MCP client, add Mnemo to the mcpServers section of your client config, pointing at the binary and supplying any required environment variables. For example, you can start Mnemo with a local database path and pass an API key or other secrets through the environment if your deployment needs them. Once connected, call mnemo.remember to store memories (with embeddings and tags), mnemo.recall to retrieve memories by semantic similarity or keywords, mnemo.forget to delete memories (with soft-delete, hard-delete, and decay options), and mnemo.share to share memories with other agents. The mnemo.checkpoint, mnemo.branch, mnemo.merge, mnemo.replay, mnemo.delegate, and mnemo.verify tools cover lifecycle management and integrity checks. Mnemo's multi-protocol support also lets you integrate with dashboards, service-to-service workflows, and SQL clients via pgwire.
SDKs are available across Python, TypeScript, and Go, providing convenient wrappers and toolsets to manage memories from your preferred development environment.
How to install
Prerequisites:
- Rust toolchain (rustup, cargo)
- Git to clone the repository (optional if you already have the source)
Step 1: Install Rust
- macOS/Linux: curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
- Windows: follow the Rustup installation guide at https://rustup.rs/
Step 2: Build Mnemo in release mode
- Open a terminal in the Mnemo project root and run: cargo build --release
Step 3: Prepare your MCP client configuration
- Create or modify your MCP client config to include Mnemo as a server:

```json
{
  "mcpServers": {
    "mnemo": {
      "command": "./target/release/mnemo",
      "args": ["--db-path", "./agent.mnemo.db"],
      "env": { "OPENAI_API_KEY": "sk-..." }
    }
  }
}
```
Step 4: Run Mnemo
- Ensure the working directory contains agent.mnemo.db or allow Mnemo to create it as needed.
- Start the server via your MCP runner after placing the binary at the specified path.
Step 5: Verify installation
- Connect with an MCP client and invoke basic memory operations (remember, recall) to verify the memory store is functioning correctly.
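If you want to smoke-test the stdio transport without a full client, you can generate a minimal MCP session and pipe it to the binary. This is a sketch: the initialize handshake follows the MCP specification, the tool names follow the docs above, and the argument fields ("content", "query") are assumptions about Mnemo's schema.

```python
import json

# A minimal MCP stdio session: initialize, the initialized notification,
# then a remember call followed by a recall call.
messages = [
    {"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "smoke-test", "version": "0.0.1"},
    }},
    {"jsonrpc": "2.0", "method": "notifications/initialized"},
    {"jsonrpc": "2.0", "id": 2, "method": "tools/call", "params": {
        "name": "mnemo.remember",
        "arguments": {"content": "hello from the smoke test"},
    }},
    {"jsonrpc": "2.0", "id": 3, "method": "tools/call", "params": {
        "name": "mnemo.recall",
        "arguments": {"query": "smoke test"},
    }},
]

# Pipe this output to: ./target/release/mnemo --db-path ./agent.mnemo.db
for message in messages:
    print(json.dumps(message))
```

If the recall response echoes the remembered content, the memory store is working end to end.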
Additional notes
Tips and common considerations:
- OPENAI_API_KEY is optional; provide it only if you rely on OpenAI embeddings or APIs.
- The memory store supports embeddings-based recall; ensure your client provides appropriate memory content and tags for effective retrieval.
- If you plan to use advanced features like checkpointing, branching, or replay, ensure your database backend and storage paths are properly configured and have sufficient permissions.
- Mnemo supports multiple protocols (MCP stdio, REST, gRPC, pgwire); pick the protocol that best fits your integration scenario.
- For production deployments, inject secrets through the environment or a secrets manager rather than hard-coding them in the config file, and use a proper orchestration strategy for the Mnemo binary.
Related MCP Servers
model-context-protocol-resources
Exploring the Model Context Protocol (MCP) through practical guides, clients, and servers I've built while learning about this new protocol.
mcp-batchit
🚀 MCP aggregator for batching multiple tool calls into a single request. Reduces overhead, saves tokens, and simplifies complex operations in AI agent workflows.
cortex-scout
An advanced web extraction and meta-search engine for AI agents. It features native parallel searching, Human-in-the-Loop (HITL) authentication fallback, and LLM-optimized data synthesis for deep web research.
mie
Persistent memory graph for AI agents. Facts, decisions, entities, and relationships that survive across sessions, tools, and providers. MCP server — works with Claude, Cursor, ChatGPT, and any MCP client.
Devmind
DevMind MCP provides **persistent memory capabilities** for AI assistants through the Model Context Protocol (MCP). It enables AI to remember context across conversations, automatically track development activities, and retrieve relevant information intelligently.
Cognio
Persistent semantic memory server for MCP - Give your AI long-term memory that survives across conversations. Lightweight Python server with SQLite storage and semantic search.