

@msensintaffar

npx machina-cli add skill @msensintaffar/chromadb-memory --openclaw
Files (1)
SKILL.md
3.6 KB

ChromaDB Memory

Long-term semantic memory backed by ChromaDB and local Ollama embeddings. Zero cloud dependencies.

What It Does

  • Auto-recall: Before every agent turn, queries ChromaDB with the user's message and injects relevant context automatically
  • chromadb_search tool: Manual semantic search over your ChromaDB collection
  • 100% local: Ollama (nomic-embed-text) for embeddings, ChromaDB for vector storage

Prerequisites

  1. ChromaDB running (Docker recommended):

    docker run -d --name chromadb -p 8100:8000 chromadb/chroma:latest
    
  2. Ollama with an embedding model:

    ollama pull nomic-embed-text
    
  3. Indexed documents in ChromaDB. Use any ChromaDB-compatible indexer to populate your collection.
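Any indexer works for step 3. As a minimal sketch, the TypeScript below embeds documents with Ollama and posts them to ChromaDB. The endpoint paths, payload shapes, and `doc-N` ids are assumptions based on the Ollama and ChromaDB v1 HTTP APIs, not this plugin's source; adjust them for your server versions.

```typescript
// Sketch: index documents into ChromaDB using local Ollama embeddings.
// All URLs and endpoint shapes are assumptions; verify against your
// Ollama and ChromaDB versions before use.

const OLLAMA_URL = "http://localhost:11434";
const CHROMA_URL = "http://localhost:8100";

// Build the Ollama embedding request body for one document.
function embedRequest(model: string, text: string) {
  return { model, prompt: text };
}

// Embed one document; nomic-embed-text returns a 768-dimensional vector.
async function embed(text: string): Promise<number[]> {
  const res = await fetch(`${OLLAMA_URL}/api/embeddings`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(embedRequest("nomic-embed-text", text)),
  });
  const { embedding } = await res.json();
  return embedding;
}

// Add documents plus embeddings to a collection (ChromaDB v1 "add" endpoint;
// the collection id and the doc-N ids here are illustrative).
async function indexDocuments(collectionId: string, docs: string[]) {
  const embeddings = await Promise.all(docs.map(embed));
  await fetch(`${CHROMA_URL}/api/v1/collections/${collectionId}/add`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      ids: docs.map((_, i) => `doc-${i}`),
      documents: docs,
      embeddings,
    }),
  });
}
```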

Install

# 1. Copy the plugin extension
mkdir -p ~/.openclaw/extensions/chromadb-memory
cp {baseDir}/scripts/index.ts ~/.openclaw/extensions/chromadb-memory/
cp {baseDir}/scripts/openclaw.plugin.json ~/.openclaw/extensions/chromadb-memory/

# 2. Add to your OpenClaw config (~/.openclaw/openclaw.json):
{
  "plugins": {
    "entries": {
      "chromadb-memory": {
        "enabled": true,
        "config": {
          "chromaUrl": "http://localhost:8100",
          "collectionName": "longterm_memory",
          "ollamaUrl": "http://localhost:11434",
          "embeddingModel": "nomic-embed-text",
          "autoRecall": true,
          "autoRecallResults": 3,
          "minScore": 0.5
        }
      }
    }
  }
}
# 3. Restart the gateway
openclaw gateway restart

Config Options

Option             Default                  Description
chromaUrl          http://localhost:8100    ChromaDB server URL
collectionName     longterm_memory          Collection name (auto-resolves UUID, survives reindexing)
collectionId       —                        Collection UUID (optional fallback)
ollamaUrl          http://localhost:11434   Ollama API URL
embeddingModel     nomic-embed-text         Ollama embedding model
autoRecall         true                     Auto-inject relevant memories each turn
autoRecallResults  3                        Max auto-recall results per turn
minScore           0.5                      Minimum similarity score (0-1)
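The options above can be captured in a TypeScript shape with their defaults. This type is illustrative documentation of the table, not the plugin's actual source:

```typescript
// Config shape for the chromadb-memory plugin as described in the table
// above. The interface itself is a sketch, not the plugin's own types.
interface ChromaMemoryConfig {
  chromaUrl: string;
  collectionName: string;
  collectionId?: string;    // optional UUID fallback
  ollamaUrl: string;
  embeddingModel: string;
  autoRecall: boolean;
  autoRecallResults: number;
  minScore: number;         // similarity threshold in [0, 1]
}

// Defaults listed in the Config Options table.
const defaults: ChromaMemoryConfig = {
  chromaUrl: "http://localhost:8100",
  collectionName: "longterm_memory",
  ollamaUrl: "http://localhost:11434",
  embeddingModel: "nomic-embed-text",
  autoRecall: true,
  autoRecallResults: 3,
  minScore: 0.5,
};
```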

How It Works

  1. You send a message
  2. Plugin embeds your message via Ollama (nomic-embed-text, 768 dimensions)
  3. Queries ChromaDB for nearest neighbors
  4. Results above minScore are injected into the agent's context as <chromadb-memories>
  5. Agent responds with relevant long-term context available
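The selection logic in steps 3-4 can be sketched as follows. The score conversion assumes cosine distance (score = 1 - distance), and `Hit`, `selectMemories`, and `wrapForInjection` are illustrative names, not the plugin's actual code:

```typescript
// A nearest-neighbor hit as returned by a ChromaDB query (shape assumed).
interface Hit {
  document: string;
  distance: number;
}

// Convert distances to similarity scores (assuming cosine distance),
// keep results at or above minScore, and cap at autoRecallResults.
function selectMemories(hits: Hit[], minScore: number, maxResults: number): string[] {
  return hits
    .map(h => ({ doc: h.document, score: 1 - h.distance }))
    .filter(h => h.score >= minScore)
    .sort((a, b) => b.score - a.score)
    .slice(0, maxResults)
    .map(h => h.doc);
}

// Wrap the surviving memories for injection into the agent context.
function wrapForInjection(memories: string[]): string {
  return `<chromadb-memories>\n${memories.join("\n")}\n</chromadb-memories>`;
}
```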

Token Cost

Auto-recall adds ~275 tokens per turn worst case (3 results × ~300 chars + wrapper). Against a 200K+ context window, this is negligible.
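The ~275-token figure checks out as a back-of-envelope estimate, assuming roughly 4 characters per token (a common heuristic, not an exact tokenizer) and a ~50-token allowance for the wrapper (an assumption):

```typescript
// Worst-case token estimate for auto-recall, per the figures above.
const results = 3;            // autoRecallResults default
const charsPerResult = 300;   // approximate memory length
const charsPerToken = 4;      // rough heuristic, not a real tokenizer
const wrapperTokens = 50;     // allowance for <chromadb-memories> wrapper (assumption)

const contentTokens = (results * charsPerResult) / charsPerToken; // 225
const total = contentTokens + wrapperTokens;                      // 275
```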

Tuning

  • Too noisy? Raise minScore to 0.6 or 0.7
  • Missing context? Lower minScore to 0.4, increase autoRecallResults to 5
  • Want manual only? Set autoRecall: false, use chromadb_search tool

Architecture

User Message → Ollama (embed) → ChromaDB (query) → Context Injection
                                                  ↓
                                          Agent Response

No OpenAI. No cloud. Your memories stay on your hardware.

Source

git clone https://clawhub.ai/msensintaffar/chromadb-memory

Overview

ChromaDB Memory provides long-term semantic memory using ChromaDB and local Ollama embeddings. It injects relevant recalled context into each turn, keeping sensitive data on premises with no cloud dependencies.

How This Skill Works

On each message, the plugin embeds the text with Ollama using the nomic-embed-text model. It then queries ChromaDB for nearest memories and injects the top results above a minScore into the agent context before generating a reply.

When to Use It

  • You want persistent memory across user chats without cloud services
  • You require data privacy and local hosting for sensitive tasks
  • You want automatic context injection before every agent turn
  • You want to perform manual semantic search via chromadb_search when needed
  • You are operating in environments with no external API access

Quick Start

  1. Run ChromaDB locally (see Prerequisites for the docker run command) and start Ollama with an embedding model
  2. Install the chromadb-memory extension and add it to your OpenClaw config with chromaUrl, collectionName, ollamaUrl, embeddingModel, autoRecall, and minScore
  3. Restart the gateway; automatic recall is now active on every agent turn

Best Practices

  • Ensure ChromaDB and Ollama are running locally before enabling the plugin
  • Tune minScore and autoRecallResults to balance relevance with noise
  • Use the default collectionName longterm_memory to survive reindexing
  • Experiment with chromadb_search for controlled memory retrieval
  • Monitor token impact per turn and adjust settings accordingly

Example Use Cases

  • Support agents recall user preferences and prior tickets to resolve issues faster
  • Researchers remember prior sources and summaries to build on previous work
  • Onboarding assistants remember company policies and product docs across sessions
  • Privacy focused assistants store and reference sensitive data entirely on premises
  • Knowledge workers link reminders and notes to current prompts for context
