engram-rs
Memory engine for AI agents — time axis (3-layer decay/promotion) + space axis (self-organizing topic tree). Hybrid search, LLM consolidation. Single Rust binary.
claude mcp add --transport stdio kael-bit-engram-rs -- docker run -i ghcr.io/kael-bit/engram-rs:latest
How to use
Engram-rs is a Rust-based memory engine for AI agents. It stores memories in three layers (Buffer, Working, Core) and uses an automatic decay mechanism, a quality gate for long-term retention, and a self-organizing topic tree to keep knowledge approachable. The server exposes a lightweight HTTP API that lets you store memories, recall by meaning, resume sessions, and interact with triggers and topics. Typical workflows include storing a memory with content and tags, recalling relevant memories by a query, and resuming a session to restore full context. You can also browse and expand topics, and use triggers to fetch groups of relevant lessons before risky operations.
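The workflow above can be sketched with curl. This is a minimal illustration assuming the default port (3917) and the endpoints listed under Additional notes; the exact request and response fields are assumptions, so check the repository docs for the authoritative schema.

```shell
# Base URL for a locally running Engram instance (default port 3917).
BASE="http://localhost:3917"

# Store a memory with content and tags.
curl -s -X POST "$BASE/memories" \
  -H 'Content-Type: application/json' \
  -d '{"content": "Always run tests before deploying", "tags": ["deploy"]}'

# Recall relevant memories by meaning.
curl -s -X POST "$BASE/recall" \
  -H 'Content-Type: application/json' \
  -d '{"query": "deployment checklist"}'

# Resume a session to restore full context.
curl -s "$BASE/resume"
```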
How to install
Prerequisites:
- Docker installed and running
- Basic familiarity with curl or any HTTP client
Installation steps (Docker):
- Pull and run the Engram image:
docker pull ghcr.io/kael-bit/engram-rs:latest
docker run -d --name engRAM -p 3917:3917 ghcr.io/kael-bit/engram-rs:latest
- Verify the service is running: you should be able to reach the API at http://localhost:3917/
- Interact with the API (examples below) to store, recall, or resume:
curl -X POST http://localhost:3917/memories \
  -H 'Content-Type: application/json' \
  -d '{"content": "Always run tests before deploying", "tags": ["deploy"]}'
curl -X POST http://localhost:3917/recall \
  -H 'Content-Type: application/json' \
  -d '{"query": "deployment checklist"}'
- When you no longer want the container, stop and remove it:
docker stop engRAM
docker rm engRAM
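If you want memory data to survive container restarts, map a host directory into the container instead of the plain run command above. A sketch follows; the in-container storage path (`/data` here) is an assumption, so check the repository docs for the actual path, and stop any container already using the `engRAM` name first.

```shell
# Host directory that will hold Engram's data across restarts.
DATA_DIR="$PWD/engram-data"
mkdir -p "$DATA_DIR"

# Same run command as above, plus a volume mount.
# NOTE: /data is an assumed in-container path -- verify it in the repo docs.
docker run -d --name engRAM \
  -p 3917:3917 \
  -v "$DATA_DIR:/data" \
  ghcr.io/kael-bit/engram-rs:latest
```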
Alternative: Build from source (Rust) if you prefer to run locally without Docker:
- Ensure Rust toolchain is installed (Rust 1.75+)
- Clone the repository and build the binary
- Run the binary directly with the appropriate flags (see repository docs for exact runtime arguments).
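The source-build steps can be sketched as follows. The repository URL is inferred from the container image name and the binary name is an assumption; adjust both against the repository docs.

```shell
# Assumed repository location (inferred from ghcr.io/kael-bit/engram-rs).
REPO="https://github.com/kael-bit/engram-rs"

git clone "$REPO.git"
cd engram-rs
cargo build --release

# Binary name and flags are assumptions; see the repository docs for
# the exact runtime arguments.
./target/release/engram-rs
```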
Additional notes
Tips:
- The server assumes port 3917 by default (as shown in examples). If you change the port, update your requests accordingly.
- The API includes endpoints for memories, recall, resume, and topic/navigation features (e.g., POST /memories, POST /recall, GET /resume, POST /topic).
- The memory system balances recall performance against long-term retention: memories must pass the quality gate to be promoted, so the tags and content you store influence what is kept.
- When deploying with Docker, consider mapping persistent storage if you want memory data to survive container restarts (mount a host directory to the container path used by Engram).
- If you encounter network or port issues, ensure no other service is occupying 3917 and that Docker has access to the host network as needed.
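The port-conflict check from the last tip can be done like this (lsof is assumed to be available; `ss -ltn` is a common alternative on Linux):

```shell
# Check whether anything is already listening on Engram's default port.
PORT=3917
if lsof -i :"$PORT" >/dev/null 2>&1; then
  echo "port $PORT is already in use"
else
  echo "port $PORT is free"
fi
```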