Neural Memory
```bash
npx machina-cli add skill @nhadaututtheky/neural-memory --openclaw
```

NeuralMemory — Associative Memory for AI Agents
A biologically-inspired memory system that uses spreading activation instead of keyword/vector search. Memories form a neural graph where neurons connect via 20 typed synapses. Frequently co-accessed memories strengthen their connections (Hebbian learning). Stale memories decay naturally. Contradictions are auto-detected.
Why not just vector search? Vector search finds documents similar to your query. NeuralMemory finds conceptually related memories through graph traversal — even when there's no keyword or embedding overlap. "What decision did we make about auth?" activates time + entity + concept neurons simultaneously and finds the intersection.
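The idea can be illustrated with a toy spreading-activation pass over a weighted graph. The graph contents, decay factor, and activation threshold below are invented for illustration, not NeuralMemory internals:

```python
from collections import defaultdict

def spreading_activation(graph, seeds, depth=2, decay=0.5, threshold=0.1):
    """Toy spreading activation: seed neurons fire with energy 1.0,
    and activation propagates along weighted synapses, attenuating
    at each hop; spreads below the threshold are pruned."""
    activation = defaultdict(float)
    frontier = {seed: 1.0 for seed in seeds}
    for _ in range(depth):
        next_frontier = {}
        for node, energy in frontier.items():
            activation[node] += energy
            for neighbor, weight in graph.get(node, []):
                spread = energy * weight * decay
                if spread >= threshold:
                    prev = next_frontier.get(neighbor, 0.0)
                    next_frontier[neighbor] = max(prev, spread)
        frontier = next_frontier
    for node, energy in frontier.items():
        activation[node] += energy
    return dict(activation)

# Hypothetical memory graph: an "auth" query neuron links a decision,
# a concept, and (transitively) an entity neuron.
graph = {
    "auth": [("decision:jwt", 0.9), ("concept:security", 0.6)],
    "decision:jwt": [("entity:backend", 0.7)],
}
scores = spreading_activation(graph, seeds=["auth"])
```

Note how `entity:backend` receives activation despite sharing no keywords with the query: it is reached only through graph traversal, which is the behavior the paragraph above describes.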
Setup
1. Install NeuralMemory
```bash
pip install neural-memory
nmem init
```
This creates ~/.neuralmemory/ with a default brain and configures MCP automatically.
2. Configure MCP for OpenClaw
Add to your OpenClaw MCP configuration (~/.openclaw/mcp.json or project openclaw.json):
```json
{
  "mcpServers": {
    "neural-memory": {
      "command": "python3",
      "args": ["-m", "neural_memory.mcp"],
      "env": {
        "NEURALMEMORY_BRAIN": "default"
      }
    }
  }
}
```
3. Verify
```bash
nmem stats
```
You should see brain statistics (neurons, synapses, fibers).
Tools Reference
Core Memory Tools
| Tool | Purpose | When to Use |
|---|---|---|
| `nmem_remember` | Store a memory | After decisions, errors, facts, insights, user preferences |
| `nmem_recall` | Query memories | Before tasks, when user references past context, "do you remember..." |
| `nmem_context` | Get recent memories | At session start, inject fresh context |
| `nmem_todo` | Quick TODO with 30-day expiry | Task tracking |
Intelligence Tools
| Tool | Purpose | When to Use |
|---|---|---|
| `nmem_auto` | Auto-extract memories from text | After important conversations; captures decisions, errors, TODOs automatically |
| `nmem_recall` (depth=3) | Deep associative recall | Complex questions requiring cross-domain connections |
| `nmem_habits` | Workflow pattern suggestions | When user repeats similar action sequences |
Management Tools
| Tool | Purpose | When to Use |
|---|---|---|
| `nmem_health` | Brain health diagnostics | Periodic checkup, before sharing a brain |
| `nmem_stats` | Brain statistics | Quick overview of memory counts |
| `nmem_version` | Brain snapshots and rollback | Before risky operations, version checkpoints |
| `nmem_transplant` | Transfer memories between brains | Cross-project knowledge sharing |
Workflow
At Session Start
- Call `nmem_context` to inject recent memories into your awareness
- If user mentions a specific topic, call `nmem_recall` with that topic
During Conversation
- When a decision is made: `nmem_remember` with type="decision"
- When an error occurs: `nmem_remember` with type="error"
- When user states a preference: `nmem_remember` with type="preference"
- When asked about past events: `nmem_recall` with appropriate depth
At Session End
- Call `nmem_auto` with action="process" on important conversation segments
- This auto-extracts facts, decisions, errors, and TODOs
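The store-and-recall loop above can be sketched with a minimal in-memory stand-in. `ToyBrain`, its method signatures, and the keyword-overlap recall are illustrative assumptions, not the real `nmem` tools (which use spreading activation rather than keyword matching):

```python
import re

class ToyBrain:
    """Toy stand-in for the nmem_* tools, to show the session flow:
    remember during work, recall before tasks, context at start."""

    def __init__(self):
        self.fibers = []

    def remember(self, content, type="fact", priority=5, tags=None):
        # Mirrors nmem_remember returning a fiber_id (see Notes).
        fiber_id = len(self.fibers)
        self.fibers.append({"id": fiber_id, "content": content,
                            "type": type, "priority": priority,
                            "tags": tags or []})
        return fiber_id

    def context(self, n=3):
        # Most recent memories first, like nmem_context at session start.
        return sorted(self.fibers, key=lambda f: f["id"], reverse=True)[:n]

    def recall(self, query):
        # Naive word overlap stands in for graph-based recall.
        words = set(re.findall(r"\w+", query.lower()))
        return [f for f in self.fibers
                if words & set(re.findall(r"\w+", f["content"].lower()))]

brain = ToyBrain()
brain.remember("Use PostgreSQL for production", type="decision", priority=8)
brain.remember("User prefers dark mode", type="preference")
hits = brain.recall("what did we decide about production?")
```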
Examples
Remember a decision
```python
nmem_remember(
    content="Use PostgreSQL for production, SQLite for development",
    type="decision",
    tags=["database", "infrastructure"],
    priority=8
)
```
Recall with spreading activation
```python
nmem_recall(
    query="database configuration for production",
    depth=1,
    max_tokens=500
)
```
Returns memories found via graph traversal, not keyword matching. Related memories (e.g., "deploy uses Docker with pg_dump backups") surface even without shared keywords.
Trace causal chains
```python
nmem_recall(
    query="why did the deployment fail last week?",
    depth=2
)
```
Follows CAUSED_BY and LEADS_TO synapses to trace cause-and-effect chains.
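A toy sketch of that traversal, walking CAUSED_BY edges backwards from an event up to a depth limit. The storage layout (a dict keyed by synapse type and target) and the event names are invented for illustration:

```python
def trace_causes(synapses, event, max_depth=2):
    """Follow CAUSED_BY synapses backwards from an event, up to
    max_depth hops, returning the discovered causal chain."""
    chain, frontier, seen = [], [event], {event}
    for _ in range(max_depth):
        next_frontier = []
        for node in frontier:
            for cause in synapses.get(("CAUSED_BY", node), []):
                if cause not in seen:
                    seen.add(cause)
                    chain.append((node, "CAUSED_BY", cause))
                    next_frontier.append(cause)
        frontier = next_frontier
    return chain

# Hypothetical chain: deployment failed <- missing env var <- config refactor
synapses = {
    ("CAUSED_BY", "deployment failed"): ["missing env var"],
    ("CAUSED_BY", "missing env var"): ["config refactor"],
}
chain = trace_causes(synapses, "deployment failed", max_depth=2)
```

With depth=1 only the direct cause would surface; depth=2 reaches the root cause two hops away, which is why deeper recall suits causal questions.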
Auto-capture from conversation
```python
nmem_auto(
    action="process",
    text="We decided to switch from REST to GraphQL because the frontend needs flexible queries. The migration will take 2 sprints. TODO: update API docs."
)
```
Automatically extracts: 1 decision, 1 fact, 1 TODO.
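A sketch of how regex-only extraction like this can work; the patterns below are illustrative guesses, not NeuralMemory's actual rules:

```python
import re

def auto_extract(text):
    """Toy pattern-based extractor mirroring the zero-LLM approach:
    split into sentences, then classify each by surface cues."""
    memories = []
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        if not sentence:
            continue
        if re.search(r"\b(decided|decision|agreed)\b", sentence, re.I):
            memories.append(("decision", sentence))
        elif re.match(r"\s*TODO[:\s]", sentence, re.I):
            memories.append(("todo", sentence))
        else:
            memories.append(("fact", sentence))
    return memories

text = ("We decided to switch from REST to GraphQL because the frontend "
        "needs flexible queries. The migration will take 2 sprints. "
        "TODO: update API docs.")
extracted = auto_extract(text)
```

On the example text this yields one decision, one fact, and one TODO, matching the counts above.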
Key Features
- Zero LLM dependency — Pure algorithmic: regex, graph traversal, Hebbian learning
- Spreading activation — Associative recall through neural graph, not keyword/vector search
- 20 synapse types — Temporal (BEFORE/AFTER), causal (CAUSED_BY/LEADS_TO), semantic (IS_A/HAS_PROPERTY), emotional (FELT/EVOKES), conflict (CONTRADICTS)
- Memory lifecycle — Short-term → Working → Episodic → Semantic with Ebbinghaus decay
- Contradiction detection — Auto-detects conflicting memories, deprioritizes outdated ones
- Hebbian learning — "Neurons that fire together wire together" — memory improves with use
- Temporal reasoning — Causal chain traversal, event sequences, temporal range queries
- Brain versioning — Snapshot, rollback, diff brain state
- Brain transplant — Transfer filtered knowledge between brains
- Vietnamese + English — Full bilingual support for extraction and sentiment
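Two of these mechanics, Hebbian strengthening and Ebbinghaus decay, can be sketched in a few lines. The update rule and parameter values are illustrative assumptions, not NeuralMemory's actual formulas:

```python
import math

def hebbian_strengthen(weight, rate=0.1):
    """Co-activation strengthens a synapse, asymptotically toward 1.0,
    so repeated use improves recall without unbounded growth."""
    return weight + rate * (1.0 - weight)

def ebbinghaus_retention(strength, hours_elapsed):
    """Ebbinghaus forgetting curve: retention = exp(-t / S), where a
    larger memory strength S means slower decay."""
    return math.exp(-hours_elapsed / strength)

w = 0.5
for _ in range(3):  # three co-activations of the same pair of neurons
    w = hebbian_strengthen(w)

# After 24 hours, a strong memory retains far more than a weak one.
fresh = ebbinghaus_retention(strength=10.0, hours_elapsed=24)
stale = ebbinghaus_retention(strength=2.0, hours_elapsed=24)
```

This is the sense in which "memory improves with use": each co-activation nudges the synapse weight upward, and higher strength flattens the decay curve.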
Depth Levels
| Depth | Name | Speed | Use Case |
|---|---|---|---|
| 0 | Instant | <10ms | Quick facts, recent context |
| 1 | Context | ~50ms | Standard recall (default) |
| 2 | Habit | ~200ms | Pattern matching, workflow suggestions |
| 3 | Deep | ~500ms | Cross-domain associations, causal chains |
Notes
- Memories are stored locally in SQLite at `~/.neuralmemory/brains/<brain>.db`
- No data is sent to external services (unless an optional embedding provider is configured)
- Brain isolation: each brain is independent, no cross-contamination
- `nmem_remember` returns a fiber_id for reference tracking
- Priority scale: 0 (trivial) to 10 (critical), default 5
- Memory types: fact, decision, preference, todo, insight, context, instruction, error, workflow, reference
How This Skill Works
Memories are stored as interconnected neurons with typed synapses. When you query or think about a topic, activation spreads through the graph to surface conceptually related memories. Synapse strengths update via Hebbian learning, stale memories decay, and contradictions are detected to prevent inconsistent recall, all with zero LLM dependency.
When to Use It
- Remember facts, decisions, errors, or context across sessions
- User asks 'do you remember...' or references past conversations
- Starting a new task — inject relevant context from memory
- After making decisions or encountering errors — store for future reference
- User asks 'why did X happen?' — trace causal chains through memory
Quick Start
- Step 1: Install and initialize: `pip install neural-memory`, then `nmem init`
- Step 2: Configure MCP for OpenClaw: add neural-memory to your MCP config with `NEURALMEMORY_BRAIN` set
- Step 3: Verify the setup: run `nmem stats` to view brain statistics
Best Practices
- Store important decisions and errors immediately after they occur using nmem_remember
- Inject fresh context at session start with nmem_context to prime recall
- Recall prior context with nmem_recall before starting a task or when asked
- Use nmem_todo for time-bound tasks and nmem_version to snapshot critical points
- Periodically review brain health and statistics with nmem_health and nmem_stats
Example Use Cases
- Remember a policy decision about authentication across sessions and recall it when asked, e.g., 'what did we decide about auth?'
- On starting a task, inject context from memory so the agent has immediate context from past decisions
- Store a user preference after a conversation so future interactions honor it
- After an error, remember the sequence of events to trace the cause later
- Trace a causal chain by recalling related events to explain why a result occurred