mem0
🧠 Production-ready MCP server providing intelligent memory for Claude Code, with an async architecture, Neo4j knowledge graphs, smart chunking & enterprise security. One-command Docker deployment.
```shell
claude mcp add --transport stdio subhashdasyam-mem0-server-mcp uvx mem0-mcp-server \
  --env DB_HOST="<db-host-or-connection-string>" \
  --env DB_NAME="<db-name>" \
  --env DB_PORT="<db-port>" \
  --env DB_USER="<db-user>" \
  --env MEM0_TOKEN="<your-token-here>" \
  --env DB_PASSWORD="<db-password>" \
  --env MEM0_USER_ID="<your-user-id-here>" \
  --env OLLAMA_BASE_URL="<Ollama server URL, if using Ollama>"
```
How to use
Mem0 MCP Server provides persistent memory for AI assistants with a rich memory intelligence system. It exposes 13 MCP tools (covering core memory management and advanced intelligence features) and supports multi-LLM backends via Ollama, OpenAI, or Anthropic. You connect Claude Code (or any MCP client) to the Mem0 server over HTTP Stream (/mcp/) or legacy SSE (/sse/), using token-based authentication backed by PostgreSQL. Once connected, you can store memories, perform semantic searches, analyze memory graphs, and run intelligence analyses across your stored data.

The system is designed for project isolation per directory, with vector search in Postgres (pgvector) and optional graph persistence in Neo4j. The recommended transport is the HTTP Stream endpoint at http://localhost:8080/mcp/ with the appropriate authentication headers. You can also configure Claude Code to reuse a single mem0 server configuration for consistent memory across sessions. For legacy setups, SSE (/sse/) remains available for backward compatibility.
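As a minimal sketch of what a client sends over the HTTP Stream transport (the endpoint URL and the `X-MCP-Token` / `X-MCP-UserID` header names come from the configuration steps in this document; the token and user-id values below are placeholders), a JSON-RPC `tools/list` request could be assembled like this:

```python
import json

# Endpoint as described above; the trailing slash is required.
MEM0_URL = "http://localhost:8080/mcp/"

def build_request(token: str, user_id: str) -> tuple[dict, bytes]:
    """Build the headers and JSON-RPC body for an MCP tools/list call (a sketch)."""
    headers = {
        "Content-Type": "application/json",
        "X-MCP-Token": token,      # value printed by scripts/mcp-token.py create
        "X-MCP-UserID": user_id,   # the user id the token was created for
    }
    body = json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/list",    # standard MCP method that enumerates the tools
    }).encode()
    return headers, body

headers, body = build_request("<your-token-here>", "your.email@company.com")
print(json.loads(body)["method"])  # prints: tools/list
```

POSTing this body with these headers to the `/mcp/` endpoint should return the server's tool catalog; the actual transport details are governed by the MCP specification.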
How to install
Prerequisites:
- Docker and Docker Compose installed (for the one-command deployment workflow described in the project) OR a Python environment capable of running the Mem0 MCP server.
- Python 3.12 (as indicated by the project README) if you choose to run the Python-based server directly.
- Optional Ollama server or API keys for OpenAI/Anthropic integration.
Installation steps (Docker-based workflow highlighted by the repo):
- Clone the repository:

  ```shell
  git clone https://github.com/subhashdasyam-mem0-server-mcp.git
  cd mem0-mcp
  ```
- Create and configure environment variables. Copy the example env file if provided, then edit `.env` with your database and Ollama/OpenAI settings:

  ```shell
  cp .env.example .env
  ```
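As a hedged sketch, the edited `.env` might look like the fragment below. The variable names are taken from the install command at the top of this page; all values are placeholders, and the shipped `.env.example` is the authoritative list:

```shell
# Database connection (PostgreSQL with pgvector)
DB_HOST=localhost
DB_PORT=5432
DB_NAME=mem0
DB_USER=mem0
DB_PASSWORD=change-me

# LLM backend (only needed if using Ollama; 11434 is Ollama's default port)
OLLAMA_BASE_URL=http://localhost:11434
```

MEM0_TOKEN and MEM0_USER_ID are produced later, in the token-creation step.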
- Start the entire stack with the provided script (as described in the README). This spins up PostgreSQL with pgvector, Neo4j, the Mem0 REST API, and the MCP server:

  ```shell
  ./scripts/start.sh
  ```
- Run the database migrations for authentication and initial setup (as described):

  ```shell
  ./scripts/migrate-auth.sh
  ```
- Create an authentication token for your user, then copy the MEM0_TOKEN value it prints:

  ```shell
  python3 scripts/mcp-token.py create \
    --user-id your.email@company.com \
    --name "Your Name" \
    --email your.email@company.com
  ```
- Configure Claude Code to connect to Mem0 (HTTP Stream is recommended):

  ```shell
  claude mcp add mem0 http://localhost:8080/mcp/ -t http \
    -H "X-MCP-Token: ${MEM0_TOKEN}" \
    -H "X-MCP-UserID: ${MEM0_USER_ID}"
  ```
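Equivalently, the server can be declared in a Claude Code project configuration file. This is a sketch assuming the standard `.mcp.json` layout (`mcpServers` with an HTTP entry and per-request headers); verify the exact schema against the Claude Code documentation:

```json
{
  "mcpServers": {
    "mem0": {
      "type": "http",
      "url": "http://localhost:8080/mcp/",
      "headers": {
        "X-MCP-Token": "${MEM0_TOKEN}",
        "X-MCP-UserID": "${MEM0_USER_ID}"
      }
    }
  }
}
```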
Additional notes
- Always include the trailing slash in MCP URLs (/mcp/ or /sse/).
- HTTP Stream transport (/mcp/) is the recommended, modern MCP protocol; use SSE (/sse/) only for legacy compatibility.
- Ensure MEM0_TOKEN and MEM0_USER_ID are set in your environment when configuring clients.
- If you run into authentication or token errors, re-run the migrations and token creation steps, then reconfigure your client with the new token.
- The Mem0 server uses a combination of PostgreSQL with pgvector for vector search and optional Neo4j for graph storage; ensure these services are healthy and reachable.
- If deploying locally, consider keeping the .env file secure and not checked into version control.
- For updates or scaling, refer to the docker-compose setup described in the README and the one-command deployment script.
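To act on the "healthy and reachable" note above, a small best-effort probe can confirm that the MCP endpoint is answering at all. This is a sketch, not part of the project: the URL assumes the default local deployment, and any HTTP response (even an authentication rejection) counts as reachable:

```python
import urllib.error
import urllib.request

def mem0_reachable(url: str = "http://localhost:8080/mcp/", timeout: float = 2.0) -> bool:
    """Best-effort probe: True if the endpoint answers at all, False on network errors."""
    try:
        with urllib.request.urlopen(urllib.request.Request(url, method="GET"),
                                    timeout=timeout):
            return True
    except urllib.error.HTTPError:
        # The server responded (even with 4xx/5xx, e.g. an auth rejection), so it is up.
        return True
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    print("mem0 MCP endpoint reachable:", mem0_reachable())
```

If this returns False, check that `./scripts/start.sh` completed and that the PostgreSQL and Neo4j containers are healthy before debugging tokens.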
Related MCP Servers
roampal-core
Outcome-based memory for Claude Code and OpenCode
mem0-selfhosted
Self-hosted mem0 MCP server for Claude Code. Run a complete memory server against self-hosted Qdrant + Neo4j + Ollama while using Claude as the main LLM.
statelessagent
Your AI forgets everything between sessions. SAME fixes that. Local-first, no API keys, single binary.
phloem
Local-first AI memory with causal graphs. MCP server for Claude Code, Cursor, VS Code, and any MCP client. Zero network connections.
aguara
MCP server for Aguara. Gives AI agents security scanning as a tool — checks skills, plugins, and configs before install.
muninn
Persistent memory for AI coding agents. MCP server that gives Claude, Cursor, and Windsurf institutional knowledge across sessions