
mem0-integration

npx machina-cli add skill a5c-ai/babysitter/mem0-integration --openclaw

Integrate Mem0 as a universal memory layer for AI agents. Enable persistent memory storage, semantic search across memories, and personalized context retrieval.

Overview

Mem0 provides intelligent memory management for AI applications:

  • Persistent storage of conversation history and facts
  • Semantic search across stored memories
  • User-specific memory isolation
  • Automatic memory extraction from conversations
  • Support for local and cloud deployments

Capabilities

Memory Operations

  • Add memories from text or conversations
  • Search memories semantically
  • Retrieve relevant context by user/agent
  • Update and delete memories (see the sketch after this list)
  • Get memory history with timestamps
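
The Usage section below covers add, search, retrieve, and history; update and delete are sketched here, assuming the mem0ai OSS client (the memory ID is a hypothetical placeholder):

from mem0 import Memory

m = Memory()

# Rewrite an existing memory in place
m.update(memory_id="mem_abc123", data="Prefers light mode during the day")

# Remove a single memory, or clear everything stored for a user
m.delete(memory_id="mem_abc123")
m.delete_all(user_id="user123")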

Memory Types

  • Conversation memories (dialogue history)
  • Fact memories (extracted information)
  • Preference memories (user preferences)
  • Entity memories (people, places, things)

Storage Backends

  • Local SQLite/JSON storage
  • PostgreSQL for production (see the config sketch after this list)
  • Qdrant vector database integration
  • Cloud-hosted Mem0 platform
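
As an illustration, a minimal config sketch for a PostgreSQL backend, assuming Mem0's pgvector vector-store provider; key names and defaults vary across versions, so verify against the Mem0 docs. Connection values are placeholders:

from mem0 import Memory

pg_config = {
    "vector_store": {
        "provider": "pgvector",
        "config": {
            "dbname": "memories",   # placeholder database name
            "user": "mem0",         # placeholder credentials
            "password": "change-me",
            "host": "localhost",
            "port": 5432,
        }
    }
}
m = Memory.from_config(pg_config)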

Integration Patterns

  • LangChain memory integration
  • Direct API usage
  • MCP server connectivity
  • CrewAI and AutoGen compatibility

Usage

Basic Setup

from mem0 import Memory

# Initialize with default local storage
m = Memory()

# Or with custom configuration
config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "host": "localhost",
            "port": 6333,
        }
    },
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-4o-mini",
            "temperature": 0.1,
        }
    }
}
m = Memory.from_config(config)

Adding Memories

# Add memory from conversation
messages = [
    {"role": "user", "content": "I prefer dark mode for all my applications"},
    {"role": "assistant", "content": "I'll remember that you prefer dark mode."}
]
m.add(messages, user_id="user123")

# Add memory from plain text
m.add("User works at Acme Corp as a software engineer", user_id="user123")

# Add with metadata
m.add(
    "Prefers Python over JavaScript",
    user_id="user123",
    metadata={"category": "preferences", "confidence": 0.9}
)

Searching Memories

# Search for relevant memories
results = m.search(
    query="What are the user's preferences?",
    user_id="user123",
    limit=5
)

# Note: recent mem0 versions may wrap results as {"results": [...]};
# if so, unwrap with results = results["results"] before iterating
for memory in results:
    print(f"Memory: {memory['memory']}")
    print(f"Relevance: {memory['score']}")
    print(f"Created: {memory['created_at']}")

Getting All Memories

# Get all memories for a user
all_memories = m.get_all(user_id="user123")

# Filter by metadata (filter support varies by client and version;
# the hosted MemoryClient exposes richer filtering than the OSS client)
filtered = m.get_all(
    user_id="user123",
    metadata={"category": "preferences"}
)

Memory History

# Get memory changes over time
history = m.history(memory_id="mem_abc123")

for entry in history:
    print(f"Version: {entry['version']}")
    print(f"Content: {entry['memory']}")
    print(f"Updated: {entry['updated_at']}")

LangChain Integration

from langchain_openai import ChatOpenAI
from mem0 import MemoryClient

# Initialize Mem0 client
mem0_client = MemoryClient(api_key="your-api-key")

# Create LLM with memory-enhanced context
llm = ChatOpenAI(model="gpt-4")

def chat_with_memory(user_message: str, user_id: str) -> str:
    # Retrieve relevant memories
    memories = mem0_client.search(user_message, user_id=user_id, limit=5)
    memory_context = "\n".join([m["memory"] for m in memories])

    # Build prompt with memory context
    system_prompt = f"""You are a helpful assistant.

Here is what you remember about this user:
{memory_context}

Use this context to personalize your response."""

    # Generate response
    response = llm.invoke([
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message}
    ])

    # Store new memory from conversation
    mem0_client.add(
        [
            {"role": "user", "content": user_message},
            {"role": "assistant", "content": response.content}
        ],
        user_id=user_id
    )

    return response.content
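
A hypothetical call, assuming valid OpenAI and Mem0 API keys are configured:

# Retrieves prior memories, answers, and stores the new exchange
reply = chat_with_memory("Which editor theme should I use?", user_id="user123")
print(reply)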

MCP Server Usage

# Using Mem0 MCP server with Claude
# Configure in claude_desktop_config.json:
{
    "mcpServers": {
        "mem0": {
            "command": "npx",
            "args": ["-y", "@mem0/mcp-server"]
        }
    }
}

Task Definition

const mem0IntegrationTask = defineTask({
  name: 'mem0-integration-setup',
  description: 'Configure Mem0 memory layer for AI agent',

  inputs: {
    storageBackend: { type: 'string', default: 'local' },  // 'local', 'qdrant', 'postgres', 'cloud'
    vectorDimension: { type: 'number', default: 1536 },
    embeddingModel: { type: 'string', default: 'text-embedding-3-small' },
    memoryCategories: { type: 'array', default: ['facts', 'preferences', 'conversations'] },
    userIsolation: { type: 'boolean', default: true }
  },

  outputs: {
    configured: { type: 'boolean' },
    memoryStats: { type: 'object' },
    artifacts: { type: 'array' }
  },

  async run(inputs, taskCtx) {
    return {
      kind: 'skill',
      title: `Configure Mem0 with ${inputs.storageBackend} backend`,
      skill: {
        name: 'mem0-integration',
        context: {
          storageBackend: inputs.storageBackend,
          vectorDimension: inputs.vectorDimension,
          embeddingModel: inputs.embeddingModel,
          memoryCategories: inputs.memoryCategories,
          userIsolation: inputs.userIsolation,
          instructions: [
            'Validate storage backend availability',
            'Configure embedding model and vector dimensions',
            'Set up memory categories and metadata schemas',
            'Implement user isolation if enabled',
            'Create memory add/search/retrieve functions',
            'Test memory operations with sample data',
            'Document integration patterns for the application'
          ]
        }
      },
      io: {
        inputJsonPath: `tasks/${taskCtx.effectId}/input.json`,
        outputJsonPath: `tasks/${taskCtx.effectId}/result.json`
      }
    };
  }
});

Applicable Processes

  • conversational-memory-system
  • long-term-memory-management
  • chatbot-design-implementation
  • conversational-persona-design

External Dependencies

  • mem0ai Python package
  • Vector database (optional: Qdrant, Pinecone)
  • LLM provider (OpenAI, Anthropic, etc.)
  • Mem0 Platform API key (for cloud)

References

Related Skills

  • SK-MEM-001 zep-memory-integration
  • SK-MEM-003 redis-memory-backend
  • SK-MEM-004 memory-summarization
  • SK-MEM-005 entity-memory-extraction

Related Agents

  • AG-MEM-001 memory-architect
  • AG-MEM-002 user-profile-builder
  • AG-MEM-003 semantic-memory-curator

Source

git clone https://github.com/a5c-ai/babysitter
# SKILL.md path: plugins/babysitter/skills/babysit/process/specializations/ai-agents-conversational/skills/mem0-integration/SKILL.md

Overview

The mem0-integration skill enables AI agents to persistently store and retrieve memories. It provides semantic search across memories, user-specific isolation, and support for local and cloud deployments, making long-term context and personalization practical.

How This Skill Works

The integration uses Mem0 as a universal memory layer. You create a Memory instance, add memories (text or conversations), and query semantically to retrieve relevant context for a user. It supports multiple memory types (conversation, fact, preference, entity) and backends (SQLite/JSON, PostgreSQL, Qdrant, Mem0 cloud), and can be wired via LangChain or direct API usage.

When to Use It

  • You need long-term personalization and recall of user preferences across sessions.
  • You want to preserve and reference past conversations to maintain continuity.
  • You require semantic search over memories to surface relevant context for responses.
  • You deploy AI agents across local and cloud environments with multiple storage backends.
  • You want automatic memory extraction from conversations to build richer context.

Quick Start

  1. Initialize Mem0 with default local storage: m = Memory().
  2. Or configure a custom backend and instantiate with m = Memory.from_config(config).
  3. Add memories with m.add(...) and query with m.search(...) to build memory-informed responses (see the sketch below).
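
A minimal end-to-end sketch of these steps, assuming default local storage and an OpenAI key in the environment:

from mem0 import Memory

m = Memory()  # default local storage; embeddings require an OpenAI key

# Store a fact, then retrieve it semantically
m.add("User prefers dark mode", user_id="user123")
results = m.search("display preferences", user_id="user123", limit=3)

# Depending on version, results may be a list or {"results": [...]}
hits = results.get("results", []) if isinstance(results, dict) else results
for hit in hits:
    print(hit["memory"], hit["score"])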

Best Practices

  • Define clear memory types (conversation, fact, preference, entity) and their schemas.
  • Tag memories with metadata and timestamps to improve filtering and retrieval.
  • Choose storage backends based on deployment: local for prototyping, PostgreSQL or Qdrant for production.
  • Regularly prune low-signal or outdated memories (see the sketch after this list) and respect user privacy with isolation.
  • Leverage LangChain memory patterns or MCP-ready APIs for seamless LLM integration.
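
One possible pruning pass, a sketch assuming the OSS client's get_all/delete methods and a hypothetical 90-day retention window:

from datetime import datetime, timedelta, timezone
from mem0 import Memory

m = Memory()
cutoff = datetime.now(timezone.utc) - timedelta(days=90)  # hypothetical retention window

memories = m.get_all(user_id="user123")
items = memories.get("results", []) if isinstance(memories, dict) else memories
for mem in items:
    # created_at format varies by version; adjust parsing as needed
    created = datetime.fromisoformat(mem["created_at"].replace("Z", "+00:00"))
    if created < cutoff:
        m.delete(memory_id=mem["id"])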

Example Use Cases

  • A customer support bot remembers user preferences and past issues to personalize responses.
  • A personal assistant recalls prior tasks, deadlines, and agenda items across sessions.
  • An enterprise AI agent surfaces relevant past notes using semantic search during workstreams.
  • An e-commerce assistant tailors recommendations based on prior purchases and interactions.
  • A research assistant aggregates and references facts from previous conversations to answer complex questions.
