redis-memory-backend

npx machina-cli add skill a5c-ai/babysitter/redis-memory-backend --openclaw
Files (1)
SKILL.md
1.2 KB

Redis Memory Backend Skill

Capabilities

  • Configure Redis for conversation state storage
  • Implement message history persistence
  • Set up Redis caching for LLM responses
  • Configure TTL-based memory expiration
  • Implement Redis Pub/Sub for real-time updates
  • Design efficient key schemas

Target Processes

  • conversational-memory-system
  • chatbot-design-implementation

Implementation Details

Core Components

  1. Message Store: RedisChatMessageHistory
  2. Cache: LLM response caching
  3. State Store: Conversation state persistence
  4. Pub/Sub: Real-time updates

Configuration Options

  • Redis connection settings
  • Key prefix configuration
  • TTL settings
  • Serialization format
  • Cluster configuration

Key Schema Patterns

  • session:{session_id}:messages
  • cache:llm:{prompt_hash}
  • state:{user_id}:{key}
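
The key patterns above can be sketched as small helper functions. This is an illustrative sketch, not the skill's actual API; using SHA-256 for `prompt_hash` is an assumption (any stable hash of the prompt would do).

```python
import hashlib

# Hypothetical key builders implementing the schema above.

def messages_key(session_id: str) -> str:
    # session:{session_id}:messages -- a Redis list of serialized messages
    return f"session:{session_id}:messages"

def llm_cache_key(prompt: str) -> str:
    # cache:llm:{prompt_hash} -- SHA-256 of the prompt is an assumption
    prompt_hash = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    return f"cache:llm:{prompt_hash}"

def state_key(user_id: str, key: str) -> str:
    # state:{user_id}:{key} -- per-user conversation state
    return f"state:{user_id}:{key}"

print(messages_key("abc123"))          # session:abc123:messages
print(state_key("u42", "last_topic"))  # state:u42:last_topic
```

Centralizing key construction like this keeps the prefix conventions in one place, which makes later migrations (e.g. adding a tenant prefix) a one-line change.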

Best Practices

  • Use appropriate data structures
  • Configure proper TTLs
  • Implement connection pooling
  • Monitor memory usage

Dependencies

  • redis
  • langchain-community (RedisChatMessageHistory)

Source

git clone https://github.com/a5c-ai/babysitter.git

The skill file is at plugins/babysitter/skills/babysit/process/specializations/ai-agents-conversational/skills/redis-memory-backend/SKILL.md

Overview

This skill provides a Redis-based backend for persisting conversation state, storing message history, and caching LLM responses. It enables TTL-based memory expiration and real-time updates via Redis Pub/Sub, with flexible key schemas for sessions, prompts, and user state.

How This Skill Works

The solution consists of four core components: a Message Store using RedisChatMessageHistory to persist chat history, a Cache to store LLM responses for fast retrieval, a State Store to persist ongoing conversation state, and Pub/Sub for real-time updates. Configuration includes Redis connection settings, key prefix, TTLs, serialization format, and cluster options. Key schemas include session:{session_id}:messages, cache:llm:{prompt_hash}, and state:{user_id}:{key} to organize data efficiently.

When to Use It

  • Need persistent conversation history across sessions and agents
  • Want fast retrieval of previously generated LLM outputs via caching
  • Require real-time updates to clients or dashboards using Pub/Sub
  • Require TTL-based memory expiration to prune stale memory
  • Want to design scalable multi-user chat assistants with Redis-backed storage

Quick Start

  1. Configure Redis connection settings, key prefixes, and TTLs
  2. Wire up RedisChatMessageHistory for message storage and enable LLM caching with cache:llm keys
  3. Activate Pub/Sub for real-time updates and run a sample chat to verify the flow
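
Steps 1 and 2 can be sketched with `RedisChatMessageHistory` from the `langchain-community` dependency. The URL, prefix, and TTL values below are placeholder assumptions; the sample chat only runs when the script is executed directly against a local Redis server.

```python
from langchain_community.chat_message_histories import RedisChatMessageHistory

REDIS_URL = "redis://localhost:6379/0"  # assumption: local default Redis

def build_history(session_id: str) -> RedisChatMessageHistory:
    # Connection settings, key prefix, and TTL configured in one place
    return RedisChatMessageHistory(
        session_id=session_id,
        url=REDIS_URL,
        key_prefix="session:",  # keys become session:{session_id}
        ttl=3600,               # expire idle sessions after an hour
    )

if __name__ == "__main__":
    # Sample chat to verify the flow (requires a running Redis server)
    history = build_history("demo")
    history.add_user_message("What are your support hours?")
    history.add_ai_message("We are available 9am-5pm, Monday to Friday.")
    print([m.content for m in history.messages])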

Best Practices

  • Use appropriate Redis data structures (e.g., lists/hashes) for messages and state
  • Configure proper TTLs to balance memory usage and retention
  • Implement connection pooling and robust retry strategies
  • Monitor memory usage and set eviction policies as needed
  • Design and enforce consistent key schemas (session:, cache:, state:)

Example Use Cases

  • A customer-support bot retains per-session chat history to provide context-aware responses
  • LLM responses are cached to accelerate repeated questions like pricing or policies
  • User profiles store cross-session state using state:{user_id}:{key} keys for continuity
  • Redis Pub/Sub streams real-time updates to agent dashboards when users respond
  • Inactive conversations are automatically pruned using TTL-based expiration
