
Neural Memory


@nhadaututtheky

npx machina-cli add skill @nhadaututtheky/neural-memory --openclaw

NeuralMemory — Associative Memory for AI Agents

A biologically-inspired memory system that uses spreading activation instead of keyword/vector search. Memories form a neural graph where neurons connect via 20 typed synapses. Frequently co-accessed memories strengthen their connections (Hebbian learning). Stale memories decay naturally. Contradictions are auto-detected.

Why not just vector search? Vector search finds documents similar to your query. NeuralMemory finds conceptually related memories through graph traversal — even when there's no keyword or embedding overlap. "What decision did we make about auth?" activates time + entity + concept neurons simultaneously and finds the intersection.
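The spreading-activation idea can be sketched in a few lines. This is a minimal illustration, not NeuralMemory's actual implementation: the graph shape, decay factor, and threshold values below are assumptions chosen for the example. Each neuron passes a decayed share of its activation along weighted edges, and traversal stops once the energy falls below a threshold.

```python
from collections import defaultdict

def spread_activation(graph, seeds, decay=0.5, threshold=0.1, max_hops=3):
    """Spread activation outward from seed neurons, hop by hop.

    Each node passes energy * decay to its neighbors, scaled by the
    synapse weight; propagation stops below the threshold.
    """
    activation = defaultdict(float)
    frontier = {n: 1.0 for n in seeds}
    for _ in range(max_hops):
        next_frontier = {}
        for node, energy in frontier.items():
            activation[node] = max(activation[node], energy)
            passed = energy * decay
            if passed < threshold:
                continue
            for neighbor, weight in graph.get(node, []):
                next_frontier[neighbor] = max(
                    next_frontier.get(neighbor, 0.0), passed * weight)
        frontier = next_frontier
    return dict(activation)

# Toy graph: an "auth" query neuron linked to a decision, a page entity,
# and (two hops out) a related review -- no keyword overlap required.
graph = {
    "auth": [("jwt_decision", 0.9), ("login_page", 0.4)],
    "jwt_decision": [("security_review", 0.8)],
}
scores = spread_activation(graph, ["auth"])
```

Note how `security_review` surfaces even though it shares no terms with the query: it is reached purely through graph traversal.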

Setup

1. Install NeuralMemory

pip install neural-memory
nmem init

This creates ~/.neuralmemory/ with a default brain and configures MCP automatically.

2. Configure MCP for OpenClaw

Add to your OpenClaw MCP configuration (~/.openclaw/mcp.json or project openclaw.json):

{
  "mcpServers": {
    "neural-memory": {
      "command": "python3",
      "args": ["-m", "neural_memory.mcp"],
      "env": {
        "NEURALMEMORY_BRAIN": "default"
      }
    }
  }
}

3. Verify

nmem stats

You should see brain statistics (neurons, synapses, fibers).

Tools Reference

Core Memory Tools

| Tool | Purpose | When to Use |
| --- | --- | --- |
| nmem_remember | Store a memory | After decisions, errors, facts, insights, user preferences |
| nmem_recall | Query memories | Before tasks, when user references past context, "do you remember..." |
| nmem_context | Get recent memories | At session start, inject fresh context |
| nmem_todo | Quick TODO with 30-day expiry | Task tracking |

Intelligence Tools

| Tool | Purpose | When to Use |
| --- | --- | --- |
| nmem_auto | Auto-extract memories from text | After important conversations — captures decisions, errors, TODOs automatically |
| nmem_recall (depth=3) | Deep associative recall | Complex questions requiring cross-domain connections |
| nmem_habits | Workflow pattern suggestions | When user repeats similar action sequences |

Management Tools

| Tool | Purpose | When to Use |
| --- | --- | --- |
| nmem_health | Brain health diagnostics | Periodic checkup, before sharing brain |
| nmem_stats | Brain statistics | Quick overview of memory counts |
| nmem_version | Brain snapshots and rollback | Before risky operations, version checkpoints |
| nmem_transplant | Transfer memories between brains | Cross-project knowledge sharing |

Workflow

At Session Start

  1. Call nmem_context to inject recent memories into your awareness
  2. If user mentions a specific topic, call nmem_recall with that topic

During Conversation

  1. When a decision is made: nmem_remember with type="decision"
  2. When an error occurs: nmem_remember with type="error"
  3. When user states a preference: nmem_remember with type="preference"
  4. When asked about past events: nmem_recall with appropriate depth

At Session End

  1. Call nmem_auto with action="process" on important conversation segments
  2. This auto-extracts facts, decisions, errors, and TODOs

Examples

Remember a decision

nmem_remember(
  content="Use PostgreSQL for production, SQLite for development",
  type="decision",
  tags=["database", "infrastructure"],
  priority=8
)

Recall with spreading activation

nmem_recall(
  query="database configuration for production",
  depth=1,
  max_tokens=500
)

Returns memories found via graph traversal, not keyword matching. Related memories (e.g., "deploy uses Docker with pg_dump backups") surface even without shared keywords.

Trace causal chains

nmem_recall(
  query="why did the deployment fail last week?",
  depth=2
)

Follows CAUSED_BY and LEADS_TO synapses to trace cause-and-effect chains.
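Walking backwards along causal synapses can be pictured like this. A sketch only: the edge list and node names are made up for illustration, and the real traversal presumably weights synapses rather than following every edge.

```python
def trace_causes(edges, effect, max_depth=2):
    """Follow (effect, cause) pairs backwards from an observed effect,
    collecting the causal chain up to max_depth hops."""
    chain = []
    frontier = [effect]
    for _ in range(max_depth):
        causes = [c for (e, c) in edges if e in frontier]
        if not causes:
            break
        chain.extend(causes)
        frontier = causes
    return chain

# Hypothetical CAUSED_BY edges stored as (effect, cause) pairs
edges = [
    ("deploy_failed", "migration_skipped"),
    ("migration_skipped", "ci_timeout"),
]
chain = trace_causes(edges, "deploy_failed")
```

With depth=2 the chain reaches two links back: the skipped migration, and the CI timeout that caused it.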

Auto-capture from conversation

nmem_auto(
  action="process",
  text="We decided to switch from REST to GraphQL because the frontend needs flexible queries. The migration will take 2 sprints. TODO: update API docs."
)

Automatically extracts: 1 decision, 1 fact, 1 TODO.
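Since the skill advertises zero LLM dependency, extraction of this kind can be done with plain regex rules. The patterns below are illustrative guesses at how such a classifier might look, not NeuralMemory's actual rule set.

```python
import re

# Hypothetical trigger patterns per memory type
PATTERNS = {
    "decision": re.compile(r"\b(?:decided to|we will|agreed to)\b", re.I),
    "todo": re.compile(r"\bTODO:", re.I),
    "error": re.compile(r"\b(?:failed|exception|error)\b", re.I),
}

def auto_extract(text):
    """Split text into sentences and tag each with the first matching type."""
    memories = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        for mtype, pattern in PATTERNS.items():
            if pattern.search(sentence):
                memories.append((mtype, sentence.strip()))
                break
    return memories

text = ("We decided to switch from REST to GraphQL. "
        "TODO: update API docs.")
found = auto_extract(text)
```

Running this on the sample yields one decision and one TODO; a richer rule set would also catch plain facts like sprint estimates.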

Key Features

  • Zero LLM dependency — Pure algorithmic: regex, graph traversal, Hebbian learning
  • Spreading activation — Associative recall through neural graph, not keyword/vector search
  • 20 synapse types — Temporal (BEFORE/AFTER), causal (CAUSED_BY/LEADS_TO), semantic (IS_A/HAS_PROPERTY), emotional (FELT/EVOKES), conflict (CONTRADICTS)
  • Memory lifecycle — Short-term → Working → Episodic → Semantic with Ebbinghaus decay
  • Contradiction detection — Auto-detects conflicting memories, deprioritizes outdated ones
  • Hebbian learning — "Neurons that fire together wire together" — memory improves with use
  • Temporal reasoning — Causal chain traversal, event sequences, temporal range queries
  • Brain versioning — Snapshot, rollback, diff brain state
  • Brain transplant — Transfer filtered knowledge between brains
  • Vietnamese + English — Full bilingual support for extraction and sentiment
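The Hebbian and decay mechanics above can be sketched with two small formulas. The learning rate, weight ceiling, and stability constant below are illustrative assumptions; Ebbinghaus retention is commonly modeled as R = e^(-t/S).

```python
import math

def hebbian_strengthen(weight, rate=0.1, w_max=1.0):
    """Co-activation nudges the synapse weight toward its ceiling."""
    return weight + rate * (w_max - weight)

def ebbinghaus_retention(days_since_access, stability=5.0):
    """Retention R = e^(-t/S) decays exponentially with idle time."""
    return math.exp(-days_since_access / stability)

w = 0.5
for _ in range(3):               # three co-activations strengthen the link
    w = hebbian_strengthen(w)

fresh = ebbinghaus_retention(0)  # just accessed: full retention
stale = ebbinghaus_retention(30) # a month idle: near zero
```

The asymptotic update keeps weights bounded, so frequently co-fired neurons approach but never exceed the maximum strength.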

Depth Levels

| Depth | Name | Speed | Use Case |
| --- | --- | --- | --- |
| 0 | Instant | <10ms | Quick facts, recent context |
| 1 | Context | ~50ms | Standard recall (default) |
| 2 | Habit | ~200ms | Pattern matching, workflow suggestions |
| 3 | Deep | ~500ms | Cross-domain associations, causal chains |

Notes

  • Memories are stored locally in SQLite at ~/.neuralmemory/brains/<brain>.db
  • No data is sent to external services (unless optional embedding provider is configured)
  • Brain isolation: each brain is independent, no cross-contamination
  • nmem_remember returns fiber_id for reference tracking
  • Priority scale: 0 (trivial) to 10 (critical), default 5
  • Memory types: fact, decision, preference, todo, insight, context, instruction, error, workflow, reference

Source

git clone https://clawhub.ai/nhadaututtheky/neural-memory

Overview

NeuralMemory is a biologically-inspired memory system that stores memories as a neural graph and recalls them via spreading activation. Memories strengthen with Hebbian learning and decay over time, and contradictions are auto-detected for reliable recall across sessions.

How This Skill Works

Memories are stored as interconnected neurons with typed synapses. When you query or think about a topic, activation spreads through the graph to surface conceptually related memories. Synapse strength updates via Hebbian learning, stale memories decay, and contradictions are flagged to prevent inconsistent recall. Crucially, all of this runs with zero LLM dependency.

When to Use It

  • Remember facts, decisions, errors, or context across sessions
  • User asks 'do you remember...' or references past conversations
  • Starting a new task — inject relevant context from memory
  • After making decisions or encountering errors — store for future reference
  • User asks 'why did X happen?' — trace causal chains through memory

Quick Start

  1. Install and initialize: pip install neural-memory; nmem init
  2. Configure MCP for OpenClaw: add neural-memory to your MCP config with NEURALMEMORY_BRAIN set
  3. Verify the setup: run nmem stats to view brain statistics

Best Practices

  • Store important decisions and errors immediately after they occur using nmem_remember
  • Inject fresh context at session start with nmem_context to prime recall
  • Recall prior context with nmem_recall before starting a task or when asked
  • Use nmem_todo for time-bound tasks and nmem_version to snapshot critical points
  • Periodically review brain health and statistics with nmem_health and nmem_stats

Example Use Cases

  • Remember a policy decision about authentication across sessions and recall it when asked, e.g., 'what did we decide about auth?'
  • On starting a task, inject context from memory so the agent has immediate context from past decisions
  • Store a user preference after a conversation so future interactions honor it
  • After an error, remember the sequence of events to trace the cause later
  • Trace a causal chain by recalling related events to explain why a result occurred
