RLM-Memory
A Model Context Protocol (MCP) server that provides AI agents with persistent memory and semantic file discovery.
```shell
claude mcp add --transport stdio jumpino27-rlm-memory-mcp-server node dist/index.js \
  --env UI_PORT="3848" \
  --env GEMINI_API_KEY="your-gemini-api-key"
```
How to use
RLM Memory MCP Server provides AI agents with persistent memory and a semantic file discovery service. It exposes a set of tools that let agents index, search, and retrieve context from tracked projects and memories, enabling richer interactions without direct file-system access.

Core capabilities include initializing memory for a codebase, querying files and past memories, indexing existing codebases, and creating enriched memory entries with metadata to improve semantic search and recall. The server also exposes a UI for viewing memories and testing tools, and it can be integrated with external agents (Claude, Codex, Gemini, etc.) via the provided mcp configurations.

Use the primary bi-directional query tool to ask the MCP about relevant files and memories, then rely on the memory tooling to refine results and maintain an up-to-date semantic map of your projects.
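As a concrete illustration, an MCP tool invocation is a JSON-RPC `tools/call` request over the transport. The tool name and arguments below are hypothetical placeholders (this README only names `rlm_verify_index`); discover the server's actual tool schema via `tools/list` or the built-in UI:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "rlm_query",
    "arguments": {
      "query": "Which files handle authentication?"
    }
  }
}
```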
How to install
Prerequisites:
- Node.js (recommended latest LTS) and npm installed
- Git (optional, for cloning)
- Access to Gemini API key for AI features (see .env guidance below)
Install and run:
- Clone or download the MCP server repository
- Navigate to the project directory
- Install dependencies: `npm install`
- Build the project (if required by this setup): `npm run build`
- Create a `.env` file with your configuration: `GEMINI_API_KEY=your-gemini-api-key` and, optionally, `UI_PORT=3848`
- Start the UI/server: `npm start`
- The UI will open at http://localhost:3848 by default
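For reference, a minimal `.env` file matching the steps above looks like this (the port value shown is the documented default; adjust it to your environment):

```
GEMINI_API_KEY=your-gemini-api-key
UI_PORT=3848
```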
Notes:
- The MCP server expects to be run with the node entry point (dist/index.js after build) or an equivalent built artifact as configured in your environment.
- Ensure the environment variable GEMINI_API_KEY is present for AI features that rely on Gemini.
- If you modify the server path or deploy type (e.g., Docker), adjust the mcp_config accordingly.
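As a sketch, a stdio client configuration pointing at the built entry might look like the following. The top-level `mcpServers` key and the server name follow common MCP client conventions and are assumptions here; check your client's documentation for the exact schema:

```json
{
  "mcpServers": {
    "rlm-memory": {
      "command": "node",
      "args": ["dist/index.js"],
      "env": {
        "GEMINI_API_KEY": "your-gemini-api-key",
        "UI_PORT": "3848"
      }
    }
  }
}
```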
Additional notes
Tips and considerations:
- Environment variables: Keep GEMINI_API_KEY secure; do not commit it to version control. Use a .env file or secret management in your deployment environment.
- If you encounter port conflicts, set UI_PORT to an available port (e.g., 3850).
- When integrating with external CLIs (Claude, Codex, Gemini), ensure their mcp configuration points to the correct node entry (dist/index.js) after building the server.
- Regularly run rlm_verify_index to audit indexing status and ensure memory accuracy.
- For debugging, run npm run dev to start UI with auto-reload during development.
- If building for production, consider packaging with your preferred method (Docker, uv, etc.) and adjust mcp_config accordingly.
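If you go the Docker route, a minimal multi-stage Dockerfile sketch for a Node server of this shape might look like the following. This is an untested starting point under the assumption that `npm run build` emits `dist/index.js`:

```dockerfile
# Build stage: install all dependencies and compile to dist/
FROM node:lts AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: production dependencies plus the built artifact only
FROM node:lts-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
ENV UI_PORT=3848
EXPOSE 3848
CMD ["node", "dist/index.js"]
```

Remember to pass GEMINI_API_KEY at run time (e.g. `docker run -e GEMINI_API_KEY=...`) rather than baking it into the image.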