memorix
Cross-Agent Memory Bridge: persistent memory for AI coding agents across Cursor, Windsurf, Claude Code, Codex, Copilot, and Kiro via MCP. Never re-explain your project again.
claude mcp add --transport stdio avids2-memorix memorix serve
How to use
Memorix provides a persistent memory layer for AI coding agents. It stores decisions, observations, and context across sessions, enabling agents to recall past work without re-explaining or re-creating prior context. The server exposes a suite of MCP tools under the memorix namespace:
- Memory queries: memorix_store, memorix_search, memorix_detail, memorix_timeline, memorix_resolve, memorix_deduplicate, memorix_suggest_topic_key
- Session handling: memorix_session_start, memorix_session_end, memorix_session_context
- Knowledge graph (compatible with the MCP Official Memory Server): create_entities, create_relations, add_observations, delete_entities, delete_observations, delete_relations, search_nodes, open_nodes, read_graph
- Workspace synchronization: memorix_workspace_sync, memorix_rules_sync, memorix_skills
- Maintenance: memorix_retention, memorix_consolidate, memorix_export, memorix_import
- Web UI dashboard: memorix_dashboard
To use Memorix, install and run the server, then configure your MCP clients to point to memorix serve: in your MCP config, register a memorix server with the command memorix and the argument serve. Hybrid search is supported: BM25 keyword search is enabled by default, and you can optionally enable embedding-based search via MEMORIX_EMBEDDING, choosing api (OpenAI-compatible embedding API), fastembed (local ONNX), or transformers (local JS/WASM). When using embeddings, customize the model and credentials with MEMORIX_EMBEDDING_API_KEY, MEMORIX_EMBEDDING_BASE_URL, and MEMORIX_EMBEDDING_DIMENSIONS, depending on your provider.
Memorix also supports auto-memory hooks that capture decisions, errors, and gotchas as you work, with language support for English and Chinese. This lets memory grow seamlessly across sessions and tools, reducing repeated context re-entry and improving agent performance over time.
How to install
Prerequisites:
- Node.js and npm installed on your system (Memorix is published as an npm package).
- A supported MCP environment or agent setup to consume MCP servers.
Step-by-step installation:
- Install Memorix globally via npm: npm install -g memorix
- Verify the installation: memorix --version
- Start the Memorix server (default development mode): memorix serve
- Optional: configure the environment for embeddings (if you plan to use embedding-based search):
  - Set the embedding mode, e.g. MEMORIX_EMBEDDING=api
  - Provide API keys and endpoints as needed, e.g. MEMORIX_EMBEDDING_API_KEY=your-key MEMORIX_EMBEDDING_BASE_URL=https://api.openai.com/v1
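As a sketch, the embedding environment could be set up like this before starting the server. The endpoint and dimension values below are illustrative assumptions for an OpenAI-compatible provider, not defaults documented by Memorix:

```shell
# Enable embedding-based search via an OpenAI-compatible API
# (values shown are placeholders; substitute your provider's details)
export MEMORIX_EMBEDDING=api
export MEMORIX_EMBEDDING_API_KEY=your-key
export MEMORIX_EMBEDDING_BASE_URL=https://api.openai.com/v1
export MEMORIX_EMBEDDING_DIMENSIONS=1536

memorix serve
```

Local backends (fastembed, transformers) skip the API key and base URL entirely, trading network dependency for local compute.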
- Connect your MCP clients by updating their MCP configuration (mcp_config) to point at the Memorix server, using the command memorix with args ["serve"].
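For example, a client's MCP configuration might register Memorix as follows. The exact top-level key varies by client; "mcpServers" is the common convention and an assumption here:

```json
{
  "mcpServers": {
    "memorix": {
      "command": "memorix",
      "args": ["serve"]
    }
  }
}
```

After restarting the client, the memorix_* tools should appear in its MCP tool list.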
- (Optional) Follow the project-specific full setup guide for additional tuning and troubleshooting.
Additional notes
Tips and considerations:
- Do NOT use npx to run Memorix on every invocation; install globally and use memorix serve to avoid re-downloading on each start.
- If you plan to use embedding features, choose MEMORIX_EMBEDDING=api for OpenAI-compatible APIs or one of the local options (fastembed, transformers) based on your resource availability.
- When deploying in multi-agent environments, ensure memory and workspace synchronization tools are configured consistently across agents to maintain coherent context.
- Review MEMORIX_EMBEDDING_API_KEY and MEMORIX_EMBEDDING_BASE_URL in cloud-based setups; secure handling of keys is essential.
- The Memorix dashboard (memorix_dashboard) provides a visual knowledge graph and observation browser for easier debugging and memory management.
Related MCP Servers
claude-context
Code search MCP for Claude Code. Makes your entire codebase available as context for any coding agent.
Unity
AI-powered bridge connecting LLMs and advanced AI agents to the Unity Editor via the Model Context Protocol (MCP). Chat with AI to generate code, debug errors, and automate game development tasks directly within your project.
Overture
Overture is an open-source, locally running web interface delivered as an MCP (Model Context Protocol) server that visually maps out the execution plan of any AI coding agent as an interactive flowchart/graph before the agent begins writing code.
task-orchestrator
A light touch MCP task orchestration server for AI agents. Persistent work tracking and context storage across sessions and agents. Defines planning floors through composable notes with optional gating transitions. Coordinates multi-agent execution without prescribing how agents do their work.
codingbuddy
Codingbuddy orchestrates 29 specialized AI agents to deliver code quality comparable to a team of human experts through a PLAN → ACT → EVAL workflow.
codebase-context
Local-first Second brain for AI agents working on your codebase - detects your team coding conventions and patterns, brings in persistent memory, code-generation checks, and hybrid search with evidence scoring. Exposed through CLI and MCP server.