agent-mem
High-performance, pure Go memory middleware for AI Agents (Claude/Cursor). Features active file watching, auto-distillation via LLM, and 'Single Source of Truth' versioning. Native MCP support.
claude mcp add --transport stdio junknet-agent-mem -- docker run -i junknet-agent-mem
How to use
Agent Memory (Project Cortex) exposes a Go-based MCP server that acts as lightweight, high-performance knowledge-base middleware. It wires together ingestion, semantic search, and arbitration logic behind a standard MCP interface, so clients such as Claude Desktop, Cursor, and Gemini CLI can talk to it via MCP calls as well as HTTP endpoints. The key MCP tools are:
- mem.ingest_memory — add or update a memory
- mem.search — semantic retrieval
- mem.get — fetch full memory content
- mem.timeline — query memories over time
- mem.list_projects — obtain project summaries
The server can run over HTTP or via stdio (MCP client mode), and can emit MCP streams over SSE for real-time updates. To get started, run the binary with the desired transport (http or stdio) and point clients at the server endpoints or SSE stream as described in the README.
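Over stdio or the MCP HTTP transport, tool invocations are ordinary MCP tools/call JSON-RPC requests. A mem.search call might look like the following sketch; the argument field name ("query") is an assumption, so check the tool's published schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "mem.search",
    "arguments": { "query": "pgvector setup notes" }
  }
}
```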
Once the server is running, you can interact with memories and projects over the HTTP API: POST /ingest/memory writes a memory, GET /memories/search performs semantic search, GET /memories retrieves full content, GET /memories/timeline runs time-based queries, and GET /projects returns project summaries. If a token is enabled, requests must carry it as Authorization: Bearer <token>, as X-Agent-Mem-Token: <token>, or as a token query parameter. The MCP interface is also reachable as streamable HTTP at /mcp and as SSE at /sse for continuous updates.
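A minimal Go sketch of building an authenticated search request against the HTTP API (the endpoint path and header names come from above; the "q" query-parameter name is an assumption, so verify it against the README):

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

// newSearchRequest builds an authenticated GET /memories/search request.
// The "q" parameter name is a guess; check the API docs for the real one.
func newSearchRequest(baseURL, token, query string) (*http.Request, error) {
	u, err := url.Parse(baseURL + "/memories/search")
	if err != nil {
		return nil, err
	}
	q := u.Query()
	q.Set("q", query)
	u.RawQuery = q.Encode()

	req, err := http.NewRequest(http.MethodGet, u.String(), nil)
	if err != nil {
		return nil, err
	}
	// Either header form described above works; Bearer is the conventional one.
	req.Header.Set("Authorization", "Bearer "+token)
	return req, nil
}

func main() {
	req, err := newSearchRequest("http://127.0.0.1:8787", "my-token", "pgvector setup")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.URL.String())
	// To actually send it: resp, err := http.DefaultClient.Do(req)
}
```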
For clients, integration examples are provided for Claude Desktop, Codex CLI, Gemini CLI, and Cursor, demonstrating how to configure MCP servers and, when needed, how to supply tokens for secure access. The system also includes a “conflict detection” workflow that uses vector similarity thresholds and LLM arbitration to decide between REPLACE, KEEP_BOTH, or SKIP when ingesting new content.
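The conflict-detection flow can be sketched roughly as follows. The threshold values, type names, and the similarity-band logic here are illustrative only, not the project's actual code; real thresholds live in config/settings.yaml, and in the real system an LLM arbitrates the ambiguous middle band:

```go
package main

import (
	"fmt"
	"math"
)

type Verdict string

const (
	Replace  Verdict = "REPLACE"
	KeepBoth Verdict = "KEEP_BOTH"
	Skip     Verdict = "SKIP"
)

// cosine returns the cosine similarity of two embedding vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// arbitrate decides what to do with incoming content given its similarity
// to the closest existing memory. The 0.95/0.80 cutoffs are made-up
// examples; the middle band is where LLM arbitration would weigh in.
func arbitrate(sim float64) Verdict {
	switch {
	case sim >= 0.95:
		return Skip // near-duplicate: keep the existing memory
	case sim >= 0.80:
		return Replace // same topic; newer content wins
	default:
		return KeepBoth // distinct enough to coexist
	}
}

func main() {
	existing := []float64{1, 0, 0}
	incoming := []float64{0.9, 0.1, 0}
	fmt.Println(arbitrate(cosine(existing, incoming)))
}
```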
How to install
Prerequisites:
- Docker and Docker Compose installed
- Go 1.25+ if you want to build the Go binary locally
- PostgreSQL with pgvector (for local testing, via docker-compose)
Recommended setup (local development):
- Start the database (with pgvector) using Docker Compose:
docker-compose up -d
- Build the Agent Memory MCP binary (Go):
# From repository root
cd mcp-go && go build -o ../agent-mem ./cmd/agent-mem-mcp && cd ..
- Prepare environment configuration (example):
cp .env.example .env
# edit .env to set keys like DASHSCOPE_API_KEY, DATABASE_URL, AGENT_MEM_HTTP_TOKEN, AGENT_MEM_OWNER_ID
- Run the server in HTTP or stdio mode:
# HTTP mode (default)
./agent-mem --transport http --host 127.0.0.1 --port 8787
# STDIO mode (MCP client)
./agent-mem --transport stdio
- Optional PATH utility installation (if you want to run from anywhere):
./bin/agent-mem-install
- Verify installation by invoking MCP tools, for example:
# In HTTP mode, test ingestion
curl -X POST http://127.0.0.1:8787/ingest/memory -H 'Content-Type: application/json' -d '{"id":"1","content":"example memory"}'
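The .env referenced in the steps above might look like this; the variable names are taken from the example comment, but every value is a placeholder to adapt to your setup:

```shell
# LLM provider key used for distillation/arbitration
DASHSCOPE_API_KEY=sk-...
# PostgreSQL with pgvector, as started by docker-compose (placeholder URL)
DATABASE_URL=postgres://postgres:postgres@127.0.0.1:5432/agent_mem?sslmode=disable
# Optional bearer token protecting the HTTP API
AGENT_MEM_HTTP_TOKEN=change-me
# Identifier attached to ingested memories
AGENT_MEM_OWNER_ID=my-laptop
```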
Prerequisites recap:
- Docker for the database and optional containerized deployment
- Go for building the MCP binary (optional if you use a prebuilt binary or container image)
- A configured PostgreSQL database with pgvector support
- Configuration of environment variables as described in the README (API keys, tokens, DB URL, etc.)
Additional notes
Tips and common considerations:
- If you upgrade, the migration logic runs on startup and may add new fields (e.g., avg_embedding, summary, tags). Old data without those fields may be excluded from arbitration until you backfill.
- Use --reset-db --reset-only to rebuild the database when upgrading or when starting fresh with small datasets.
- If you enable HTTP tokens, requests can be secured using the Authorization header or token query parameters; ensure your clients are updated accordingly.
- The MCP tools (mem.ingest_memory, mem.search, mem.get, mem.timeline, mem.list_projects) are the primary programmatic interfaces; leverage them for integration tests and automation.
- The system emphasizes a “single truth” approach with conflict arbitration based on vector similarity and LLM judgment; tune similarity thresholds and model configurations in config/settings.yaml if needed.
- When running via Docker, ensure the container has network access to PostgreSQL (host or docker-compose network) and proper environment variables for DB connection and tokens.
- For client integration, you can configure Claude Desktop, Codex CLI, Gemini CLI, and Cursor with the agent-mem MCP server URL (and tokens if enabled).
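As a concrete client example, a Claude Desktop claude_desktop_config.json entry for a locally built binary could be sketched like this; the path and env values are placeholders, and the variable names follow the install steps above:

```json
{
  "mcpServers": {
    "agent-mem": {
      "command": "/path/to/agent-mem",
      "args": ["--transport", "stdio"],
      "env": {
        "DATABASE_URL": "postgres://...",
        "DASHSCOPE_API_KEY": "sk-..."
      }
    }
  }
}
```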
Related MCP Servers
grepai
Semantic Search & Call Graphs for AI Agents (100% Local)
vestige
Cognitive memory for AI agents — FSRS-6 spaced repetition, 29 brain modules, 3D dashboard, single 22MB Rust binary. MCP server for Claude, Cursor, VS Code, Xcode, JetBrains.
task-orchestrator
A light touch MCP task orchestration server for AI agents. Persistent work tracking and context storage across sessions and agents. Defines planning floors through composable notes with optional gating transitions. Coordinates multi-agent execution without prescribing how agents do their work.
shodh-memory
Cognitive memory for AI agents — learns from use, forgets what's irrelevant, strengthens what matters. Single binary, fully offline.
CodeMCP
Code intelligence for AI assistants - MCP server, CLI, and HTTP API with symbol navigation, impact analysis, and architecture mapping
mie
Persistent memory graph for AI agents. Facts, decisions, entities, and relationships that survive across sessions, tools, and providers. MCP server — works with Claude, Cursor, ChatGPT, and any MCP client.