Memory-Plus
🧠 Memory-Plus is a lightweight, local RAG memory store for MCP agents. Easily record, retrieve, update, delete, and visualize persistent "memories" across sessions; perfect for developers working with multiple AI coders (like Windsurf, Cursor, or Copilot) or anyone who wants their AI to actually remember them.
claude mcp add --transport stdio --env GOOGLE_API_KEY="<YOUR_API_KEY>" \
  yuchen20-memory-plus -- uvx -q memory-plus@latest
How to use
Memory-Plus is a local, retrieval-augmented memory store for MCP agents. It lets your agent record, search, update, and visualize persistent memories such as notes, ideas, and session context across runs. Key capabilities include adding memories, retrieving by keywords or topics, fetching the most recent entries, updating or appending to existing memories, visualizing memory relationships, importing documents, and deleting entries. It also supports memory versioning, so history is preserved whenever memories are updated.

To use Memory-Plus, configure the MCP server in your environment and run it via the UV runtime (uvx) with the memory-plus package, or use the inspector tooling to test interactions with an MCP setup. When integrated into an MCP workflow, Memory-Plus activates automatically during relevant conversations and stores memory data for future sessions.

The GOOGLE_API_KEY environment variable is required for the embedding features, which rely on Gemini embeddings; provide it in your environment or in your MCP settings. You can test locally using the MCP Inspector or by running a sample agent script that interacts with memory-plus during a session.
Usage flow example:
- Add memory entries during a chat or session using your MCP integration.
- Retrieve memories by keywords or topics to inform current responses.
- Visualize memory relationships to understand context and recall behavior.
- Import documents to memory for richer recall.
- Update memories to append new information while preserving history.
- Use versioning to track changes to memories over time.
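The flow above can be sketched with a toy in-memory store. Note this is only an illustrative model of record / retrieve / update-with-versioning; the class and method names are hypothetical and do not reflect Memory-Plus's actual API, and plain keyword matching stands in for embedding-based retrieval:

```python
from dataclasses import dataclass, field

@dataclass
class ToyMemoryStore:
    """Hypothetical sketch of record / retrieve / update with versioning."""
    entries: dict = field(default_factory=dict)   # id -> current text
    history: dict = field(default_factory=dict)   # id -> list of prior versions
    _next_id: int = 0

    def record(self, text: str) -> int:
        """Add a new memory and return its id."""
        self._next_id += 1
        self.entries[self._next_id] = text
        self.history[self._next_id] = []
        return self._next_id

    def retrieve(self, keyword: str) -> list:
        """Keyword match stands in for real embedding-based retrieval."""
        return [t for t in self.entries.values() if keyword.lower() in t.lower()]

    def update(self, mem_id: int, addition: str) -> None:
        """Append new information, preserving the old text as a version."""
        self.history[mem_id].append(self.entries[mem_id])
        self.entries[mem_id] = self.entries[mem_id] + " " + addition

store = ToyMemoryStore()
mid = store.record("User prefers TypeScript for frontend work")
store.update(mid, "and uses pnpm as the package manager.")
print(store.retrieve("typescript"))  # updated entry is found by keyword
print(store.history[mid])            # original text is kept as a version
```

In Memory-Plus itself these steps happen through MCP tool calls during a session rather than direct method calls, but the record / retrieve / versioned-update lifecycle is the same idea.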
How to install
Prerequisites:
- Python and a functioning Python environment
- Google API Key for Gemini Embeddings (GOOGLE_API_KEY)
- UV runtime (for MCP plugin support)
Step-by-step installation:
1. Install the UV runtime (required by MCP plugins and Memory-Plus):
   - Install via pip:
     pip install uv
   - Or use the provided install scripts for your OS if preferred.
2. Acquire and set up a Google API Key:
   - Get your key from Google AI Studio and set it in your environment:
     macOS/Linux:
       export GOOGLE_API_KEY="<YOUR_API_KEY>"
     Windows (PowerShell):
       setx GOOGLE_API_KEY "<YOUR_API_KEY>"
   - If using VS Code or IDE settings, you can provide GOOGLE_API_KEY in your MCP config under env.
3. Run Memory-Plus via the UV runtime:
   - Use the one-click memory-plus command in VS Code, or run via CLI:
     uvx -q memory-plus@latest
4. Optional: test with the MCP Inspector or an example agent:
   - Clone the repository and test with the Inspector tools as described in the repository:
     git clone https://github.com/Yuchen20/Memory-Plus.git
     cd Memory-Plus
     npx @modelcontextprotocol/inspector fastmcp run .\memory_plus\mcp.py
   - For an actual chat session, install dependencies and run the sample agent as shown in the README (e.g., uv run agent_memory.py) after configuring fast-agent and your keys.
5. Configure MCP in your environment:
   - Create or update your MCP JSON to include the memory-plus server configuration as shown in mcp_config.
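For reference, a memory-plus entry in an MCP JSON config commonly looks like the following. The field names follow the widely used mcpServers shape; this is an illustrative fragment, so check mcp_config in the repository and your client's own schema for the authoritative version:

```json
{
  "mcpServers": {
    "memory-plus": {
      "command": "uvx",
      "args": ["-q", "memory-plus@latest"],
      "env": {
        "GOOGLE_API_KEY": "<YOUR_API_KEY>"
      }
    }
  }
}
```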
Additional notes
Tips and common considerations:
- Ensure GOOGLE_API_KEY is available in the environment where Memory-Plus runs; the embedding features rely on this key.
- Memory-Plus supports memory import, deletion, and versioning; use these to manage what your agent recalls.
- When running Memory-Plus in VS Code or through MCP, you can specify transportType and env variables per your IDE's MCP settings.
- If you encounter slow first-time dependency downloads, allow some time for the initial setup; subsequent runs will be faster.
- If you modify memory data outside the MCP flow, consider re-indexing or refreshing caches if your setup uses them.
- The MCP configuration uses uvx with memory-plus@latest; you can pin a specific version if needed by replacing latest with a version tag.
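Since a missing GOOGLE_API_KEY is a common setup issue, a quick sanity check can confirm the key is visible to the process that will run Memory-Plus. This is a generic snippet, not part of Memory-Plus:

```python
import os

def check_google_api_key(env=os.environ):
    """Return True if GOOGLE_API_KEY is set and non-empty in the given mapping."""
    return bool(env.get("GOOGLE_API_KEY"))

# Reports on the current process environment; child processes such as
# uvx inherit this environment, so a False here means embeddings will fail.
print("GOOGLE_API_KEY set:", check_google_api_key())
```

Remember that setx on Windows only affects new shells, so re-run the check from a fresh terminal after setting the key there.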
Related MCP Servers
mcp-pinecone
Model Context Protocol server that allows reading from and writing to Pinecone. Rudimentary RAG.
Gitingest
MCP server for gitingest
mem0
✨ mem0 MCP Server: A memory system using mem0 for AI applications with Model Context Protocol (MCP) integration. Enables long-term memory for AI agents as a drop-in MCP server.
mcp-memos-py
A Python package enabling LLM models to interact with the Memos server via the MCP interface for searching, creating, retrieving, and managing memos.
mcp-python-template
This template provides a streamlined foundation for building Model Context Protocol (MCP) servers in Python. It's designed to make AI-assisted development of MCP tools easier and more efficient.
Convert-Markdown-PDF
Markdown To PDF Conversion MCP