remind
A memory layer for AI Agents
Quick add for Claude Code (stdio transport):
claude mcp add --transport stdio sandst1-remind uvx remind-mcp --port 8765
How to use
Remind provides a memory-backed MCP server that integrates with AI agents to store and retrieve generalized experiences. It exposes a set of tools to remember, recall, consolidate, and inspect memories, as well as manage episodes and concepts. Clients (like Cursor or other MCP-enabled agents) can connect to the server via an MCP URL and issue commands that operate on an episodic buffer, a semantic concept graph, and a spreading-activation retriever to surface relevant memories. The server supports multiple providers for LLMs/embeddings through a configurable backend, and memory decay to gradually deprioritize rarely recalled concepts.
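The decay behavior described above can be sketched generically. Remind's actual decay formula and configuration are not documented here, so the exponential half-life model and the `half_life_s` parameter below are illustrative assumptions only:

```python
import time

def decayed_score(base_score: float, last_recall_ts: float,
                  now: float, half_life_s: float = 7 * 86400) -> float:
    """Decay a concept's relevance score by time since its last recall.

    Illustrative sketch: remind's real decay model is configurable and
    may differ; half_life_s (default: one week) is a hypothetical knob.
    """
    elapsed = now - last_recall_ts
    return base_score * 0.5 ** (elapsed / half_life_s)

# A concept recalled exactly one half-life ago keeps half its score:
now = time.time()
print(round(decayed_score(1.0, now - 7 * 86400, now), 3))  # → 0.5
```

Under this model, frequently recalled concepts keep high scores (elapsed time stays small), while rarely recalled ones fade smoothly rather than being deleted outright.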
To use it, run the MCP server on a port and configure your client with the server URL, e.g., http://127.0.0.1:8765/sse?db=my-project. Tools available include remember (store experiences), recall (retrieve memories), consolidate (turn episodes into generalized concepts), inspect (view concepts/episodes), entities and inspect_entity (entity-level views), stats (memory stats), and episode/concept update and restoration commands. This setup enables developers to build memory-enabled agents that improve over time by consolidating experiences into actionable knowledge and linking related concepts through a graph.
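As a concrete example, an MCP client configured via JSON could point at the endpoint above. The exact file name and key layout vary by client; this `mcpServers`-style entry is an assumption, not remind's documented client config:

```json
{
  "mcpServers": {
    "remind": {
      "url": "http://127.0.0.1:8765/sse?db=my-project"
    }
  }
}
```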
How to install
Prerequisites:
- Python 3.11+ installed
- Access to install Python packages (pip)
Installation steps:
- Create and activate a Python virtual environment (optional but recommended):
  python -m venv venv
  source venv/bin/activate   # Unix/macOS
  venv\Scripts\activate      # Windows
- Install the Remind MCP package from PyPI:
  pip install remind-mcp
- Run the MCP server. With uvx, no separate install step is needed beyond fetching the package:
  uvx remind-mcp --port 8765
Note: there is currently no Node/npx-style wrapper; the server runs on the Python runtime, so ensure the Python dependencies above are installed.
Prerequisites recap:
- Python 3.11+
- Network access to install packages and bind to a port
Additional notes
- The server exposes a database-selection parameter via the MCP URL (db parameter). Each project can have its own database, e.g., ?db=my-project.
- If you are using a provider-based LLM/embedding setup, configure environment variables or a ~/.remind/remind.config.json file with keys such as llm_provider, embedding_provider, and related API keys.
- Decay in memory can be configured to gradually deprioritize rarely recalled concepts; adjust decay settings in the config as needed.
- If you encounter port or binding issues, make sure the port is free and not blocked by a firewall; the MCP endpoint path is /sse?db=<name>.
- Use the available tools (remember, recall, consolidate, inspect, etc.) to iteratively build a richer memory graph and ensure proper entity relationships are established during consolidation.
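For the provider-based setup mentioned above, a minimal ~/.remind/remind.config.json might look like the following. The llm_provider and embedding_provider keys are named in the tips; the provider values and the API-key field name here are illustrative assumptions, so check remind's own documentation for the exact schema:

```json
{
  "llm_provider": "openai",
  "embedding_provider": "openai",
  "openai_api_key": "sk-..."
}
```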