memU
Memory for 24/7 proactive agents like openclaw (moltbot, clawdbot).
claude mcp add --transport stdio nevamind-ai-memu \
  --env MEMU_CONFIG="default" \
  -- python -m memu
How to use
memU is a memory framework designed for 24/7 proactive AI agents. It treats memory as a structured, persistent file-system-like store that agents can read from and write to, enabling long-running, proactive behavior while reducing recurring LLM token costs. The server exposes tooling to monitor, curate, and query memories, and to drive proactive actions based on user intent and context. With memU running, agents continuously capture user goals, preferences, and interactions, building a connected knowledge graph of memories that can be queried, exported, or ported across environments.
To use memU, start the MCP server and interact with the memU module (via its Python interface). The toolset includes capabilities to store memories as structured items, establish cross-references, and mount external conversations or documents as memory inputs. The proactive loop enables the agent to monitor inputs, memorize insights, predict user intent, and run proactive tasks, empowering always-on assistants that can respond, act, and evolve without constant manual prompting.
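The structured, file-system-like store described above can be illustrated with a minimal, self-contained sketch. This is plain Python, not memU's actual API; the class and method names here are hypothetical and only demonstrate the idea of memory items organized under categories and topics with cross-references:

```python
import json
from pathlib import Path

class MemoryStore:
    """Illustrative file-backed memory store (hypothetical, not memU's API):
    items are JSON files organized under category/topic directories, with
    cross-references stored as lists of other item identifiers."""

    def __init__(self, root):
        self.root = Path(root)

    def write(self, category, topic, item_id, content, links=()):
        # Persist one memory item as a JSON file under category/topic.
        path = self.root / category / topic
        path.mkdir(parents=True, exist_ok=True)
        record = {"content": content, "links": list(links)}
        (path / f"{item_id}.json").write_text(json.dumps(record))

    def read(self, category, topic, item_id):
        # Load a single memory item back into a dict.
        path = self.root / category / topic / f"{item_id}.json"
        return json.loads(path.read_text())

    def query(self, category):
        # List all memory item ids under a category, across all topics.
        return sorted(p.stem for p in (self.root / category).rglob("*.json"))
```

An agent loop built on a store like this would write items as it observes user goals and preferences, link related items via `links`, and query by category when deciding on proactive actions.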
How to install
Prerequisites:
- Python 3.13+ installed on your machine
- Basic familiarity with Python virtual environments

1. Create and activate a Python virtual environment:

   python -m venv venv
   source venv/bin/activate   # on macOS/Linux
   venv\Scripts\activate      # on Windows

2. Install the memU package from PyPI:

   pip install memu-py

3. Verify the installation:

   python -m memu --version
   # or run "import memu" in a Python shell to test the basic import

4. Run the MCP server (details provided in mcp_config). You can start it using the configured command, typically via the mcp_config entry described above. For the Python module approach:

   python -m memu

5. Optional: configure environment variables as needed for your deployment (see the additional notes below for common options).
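As an alternative to the CLI command shown above, most MCP clients also accept a JSON server entry in their configuration file. A typical shape, reusing the command and environment variable from this listing (the `"memu"` key name is an arbitrary label), would be:

```json
{
  "mcpServers": {
    "memu": {
      "command": "python",
      "args": ["-m", "memu"],
      "env": { "MEMU_CONFIG": "default" }
    }
  }
}
```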
Additional notes
Tips and common issues:
- Ensure Python 3.13+ is installed and accessible in your PATH.
- If memory data is large, consider persistent storage options and proper disk I/O tuning.
- Use MEMU_CONFIG to switch between default, development, or production memory profiles.
- When deploying in containers, bind-mount memory stores to preserve data across restarts.
- If the server fails to start, check that the memu module is installed correctly and that there are no port conflicts with other services.
- Review memory item schemas: organize memories under categories and topics to improve query performance and maintainability.
- For debugging, enable verbose logging in MEMU_CONFIG or through standard Python logging configuration to trace memory capture, linking, and proactive task execution.
Related MCP Servers
EverMemOS
Long-term memory OS for your agents across LLMs and platforms.
mcp-memory-service
Open-source persistent memory for AI agent pipelines (LangGraph, CrewAI, AutoGen) and Claude. REST API + knowledge graph + autonomous consolidation.
Sentient
A personal AI assistant for everyone
recall
Persistent cross-session memory for Claude & AI agents. Self-host on Redis/Valkey, or use the managed SaaS at recallmcp.com.
nowledge-mem
A memory and context manager that just works.
cursor10x
The Cursor10x MCP is a persistent multi-dimensional memory system for Cursor that enhances AI assistants with conversation context, project history, and code relationships across sessions.