mie
Persistent memory graph for AI agents. Facts, decisions, entities, and relationships that survive across sessions, tools, and providers. MCP server — works with Claude, Cursor, ChatGPT, and any MCP client.
claude mcp add --transport stdio kraklabs-mie mie --mcp
How to use
MIE (Memory Intelligence Engine) acts as a shared, persistent knowledge graph for your AI agents. Claude, Cursor, ChatGPT, and other MCP clients read from and write to a single evolving graph, so context, decisions, and relationships survive across sessions, tools, and providers.
The server exposes its toolset over the MCP protocol: agents can store facts, decisions, entities, and events; perform semantic search and graph traversals; invalidate or update content; manage conflicts; and export or repair the graph as needed. You can query the graph to retrieve structured context before an agent responds, and import knowledge from repositories or files using batch operations. The tools are exposed per MCP client and invoked through the standard JSON-RPC interface over stdio, so MIE integrates with your existing agent workflows.
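The JSON-RPC-over-stdio transport can be sketched as below. This is a generic MCP client sketch, not mie-specific: the initialize parameters and newline-delimited framing follow the MCP specification, the protocol version string is illustrative, and the subprocess invocation is shown only in comments.

```python
import json

def jsonrpc_request(req_id, method, params=None):
    """Build one newline-delimited JSON-RPC 2.0 message, as MCP's stdio transport expects."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg) + "\n"

# A client's opening handshake, followed by tool discovery.
init = jsonrpc_request(1, "initialize", {
    "protocolVersion": "2024-11-05",  # illustrative; use your client's supported version
    "capabilities": {},
    "clientInfo": {"name": "example-client", "version": "0.1"},
})
list_tools = jsonrpc_request(2, "tools/list")

# These lines would be written to the stdin of a `mie --mcp` subprocess, e.g.:
#   proc = subprocess.Popen(["mie", "--mcp"], stdin=subprocess.PIPE,
#                           stdout=subprocess.PIPE, text=True)
#   proc.stdin.write(init); proc.stdin.flush()
print(list_tools.strip())
```

In practice your MCP client performs this handshake for you; the sketch only shows what crosses the pipe.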
How to install
Prerequisites:
- macOS or Linux environment
- Homebrew installed (for macOS users) or an equivalent package manager
- Basic familiarity with command line operations
Install steps:
1. Tap the MIE Homebrew repository and install the CLI:
   - brew tap kraklabs/mie
   - brew install mie
2. Initialize MIE (creates default config and local store):
   - mie init
   - You can also run an interactive setup with: mie init --interview
3. Run the MIE MCP server (example):
   - mie --mcp
4. Connect MCP clients (examples):
   - Claude / Cursor / etc. can reference the server via an .mcp.json (or .cursor/mcp.json) file with: { "mcpServers": { "mie": { "command": "mie", "args": ["--mcp"] } } }
5. Optional: manage the daemon (if using a shared daemon workflow):
   - mie daemon
   - Ensure the daemon has the proper permissions and the graph database is accessible.
Note: If you’re not on macOS or prefer other installation methods, you can typically build from source or use the prebuilt binaries provided by the project releases.
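The client registration shown in step 4 can also be written out programmatically. A minimal sketch, assuming the config lands in the current directory as .mcp.json (the config shape mirrors the example above):

```python
import json
from pathlib import Path

# Register the mie MCP server for clients that read .mcp.json.
config = {"mcpServers": {"mie": {"command": "mie", "args": ["--mcp"]}}}

path = Path(".mcp.json")
path.write_text(json.dumps(config, indent=2) + "\n")
print(path.read_text())
```

Cursor reads the same shape from .cursor/mcp.json; only the path differs.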
Additional notes
Tips and considerations:
- The server exposes 12 MCP tools, including mie_store, mie_query, mie_bulk_store, mie_update, mie_export, mie_repair, and mie_status. Use mie_status to monitor graph health and usage metrics.
- MIE uses an agent-as-evaluator pattern: there is no server-side inference cost for storing; your agent decides what to persist.
- For import workflows, you can leverage mie_bulk_store to batch-import knowledge from files or git histories.
- If running multiple MCP clients, the daemon ensures an exclusive DB lock and multiplexes access. Use mie daemon to manage the lifecycle in multi-client setups.
- When configuring clients, point them at the stdio JSON-RPC interface exposed by the mie server, as shown in the example .mcp.json and .cursor/mcp.json files.
- If you encounter issues with data consistency, use mie_conflicts to detect contradictions and mie_repair to rebuild indices and clean embeddings.
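A batch import through mie_bulk_store would travel as a standard MCP tools/call envelope. The argument shape below ("items", "kind", "content") is a hypothetical illustration, not mie's documented schema: discover the real parameter schema via a tools/list request first.

```python
import json

# Hypothetical tools/call payload for batch-importing knowledge via mie_bulk_store.
# The "arguments" structure is an assumption for illustration only.
call = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "mie_bulk_store",
        "arguments": {
            "items": [
                {"kind": "fact", "content": "Service A depends on Service B"},
                {"kind": "decision", "content": "Adopted SQLite for the local store"},
            ]
        },
    },
}
print(json.dumps(call))
```

Only the outer envelope (jsonrpc, id, method, params.name, params.arguments) is fixed by the MCP spec; everything inside "arguments" is defined by the tool's own input schema.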
Related MCP Servers
web-agent-protocol
🌐 Web Agent Protocol (WAP) - Record and replay user interactions in the browser with MCP support
flyto-core
The open-source execution engine for AI agents. 412 modules, MCP-native, triggers, queue, versioning, metering.
MegaMemory
Persistent project knowledge graph for coding agents. MCP server with semantic search, in-process embeddings, and web explorer.
mcp-ragex
MCP server for intelligent code search: semantic (RAG), symbolic (tree-sitter), and regex (ripgrep) search modes. Built for Claude Code and AI coding assistants.
mode-manager
MCP Memory Agent Server - A VS Code chatmode and instruction manager with library integration
cadre-ai
Your AI agent squad for Claude Code. 17 specialized agents, persistent memory, desktop automation, and a common sense engine.