redis-memory-backend

npx machina-cli add skill a5c-ai/babysitter/redis-memory-backend --openclaw

Redis Memory Backend Skill
Capabilities
- Configure Redis for conversation state storage
- Implement message history persistence
- Set up Redis caching for LLM responses
- Configure TTL-based memory expiration
- Implement Redis Pub/Sub for real-time updates
- Design efficient key schemas
Target Processes
- conversational-memory-system
- chatbot-design-implementation
Implementation Details
Core Components
- Message Store: RedisChatMessageHistory
- Cache: LLM response caching
- State Store: Conversation state persistence
- Pub/Sub: Real-time updates
Configuration Options
- Redis connection settings
- Key prefix configuration
- TTL settings
- Serialization format
- Cluster configuration
Key Schema Patterns
- session:{session_id}:messages
- cache:llm:{prompt_hash}
- state:{user_id}:{key}
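These key patterns can be produced by small helper functions so every component writes keys the same way. The sketch below is illustrative: the `chatbot:` prefix and the SHA-256 prompt hash are assumptions (any stable prefix and hash will do).

```python
import hashlib

KEY_PREFIX = "chatbot:"  # assumed app-wide prefix; set via the key-prefix config option


def session_messages_key(session_id: str) -> str:
    # session:{session_id}:messages -- ordered list of serialized messages
    return f"{KEY_PREFIX}session:{session_id}:messages"


def llm_cache_key(prompt: str) -> str:
    # cache:llm:{prompt_hash} -- hash the prompt so keys stay short and uniform
    digest = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    return f"{KEY_PREFIX}cache:llm:{digest}"


def user_state_key(user_id: str, key: str) -> str:
    # state:{user_id}:{key} -- per-user, per-field conversation state
    return f"{KEY_PREFIX}state:{user_id}:{key}"
```

Hashing the prompt keeps cache keys a fixed length regardless of prompt size, and identical prompts always map to the same key.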
Best Practices
- Use appropriate data structures
- Configure proper TTLs
- Implement connection pooling
- Monitor memory usage
Dependencies
- redis
- langchain-community (RedisChatMessageHistory)
Source
https://github.com/a5c-ai/babysitter/blob/main/plugins/babysitter/skills/babysit/process/specializations/ai-agents-conversational/skills/redis-memory-backend/SKILL.md

Overview
This skill provides a Redis-based backend for persisting conversation state, storing message history, and caching LLM responses. It enables TTL-based memory expiration and real-time updates via Redis Pub/Sub, with flexible key schemas for sessions, prompts, and user state.
How This Skill Works
The solution consists of four core components: a Message Store using RedisChatMessageHistory to persist chat history, a Cache to store LLM responses for fast retrieval, a State Store to persist ongoing conversation state, and Pub/Sub for real-time updates. Configuration includes Redis connection settings, key prefix, TTLs, serialization format, and cluster options. Key schemas include session:{session_id}:messages, cache:llm:{prompt_hash}, and state:{user_id}:{key} to organize data efficiently.
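The store-and-cache flow described above can be sketched with an in-memory dict standing in for the Redis client, so the logic stays visible without a running server. All class and method names here are illustrative; a production version would issue RPUSH/LRANGE for messages and SET with EX for the cache via redis-py or RedisChatMessageHistory.

```python
import json
import time


class MemoryBackend:
    """Toy stand-in for the Redis-backed message store and LLM cache."""

    def __init__(self, clock=time.monotonic):
        self._data = {}     # key -> list (messages) or str (cached response)
        self._expiry = {}   # key -> absolute expiry time
        self._clock = clock

    def _alive(self, key):
        exp = self._expiry.get(key)
        if exp is not None and self._clock() >= exp:
            # Lazy TTL eviction, analogous to Redis key expiration
            self._data.pop(key, None)
            self._expiry.pop(key, None)
        return key in self._data

    # Message Store: RPUSH + EXPIRE in real Redis
    def append_message(self, session_id, role, text, ttl=3600):
        key = f"session:{session_id}:messages"
        self._data.setdefault(key, []).append(json.dumps({"role": role, "text": text}))
        self._expiry[key] = self._clock() + ttl

    def get_messages(self, session_id):
        key = f"session:{session_id}:messages"
        return [json.loads(m) for m in self._data[key]] if self._alive(key) else []

    # LLM Cache: SET key value EX ttl / GET in real Redis
    def cache_response(self, prompt_hash, response, ttl=600):
        key = f"cache:llm:{prompt_hash}"
        self._data[key] = response
        self._expiry[key] = self._clock() + ttl

    def cached_response(self, prompt_hash):
        key = f"cache:llm:{prompt_hash}"
        return self._data[key] if self._alive(key) else None
```

Injecting the clock makes TTL behavior testable; with Redis itself, expiration is handled server-side and the application only sets the TTLs.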
When to Use It
- Need persistent conversation history across sessions and agents
- Want fast retrieval of previously generated LLM outputs via caching
- Require real-time updates to clients or dashboards using Pub/Sub
- Require TTL-based memory expiration to prune stale memory
- Building scalable multi-user chat assistants on Redis-backed storage
Quick Start
- Step 1: Configure Redis connection settings, key prefixes, and TTLs
- Step 2: Wire up RedisChatMessageHistory for message storage and enable LLM caching with cache:llm keys
- Step 3: Activate Pub/Sub for real-time updates and run a sample chat to verify flow
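Step 3's real-time flow follows the same publish/subscribe contract as Redis (`PUBLISH`/`SUBSCRIBE`). The in-process sketch below mirrors that contract so the pattern is clear; channel names and handler signatures are assumptions, and a real deployment would use a redis-py `pubsub()` listener instead.

```python
from collections import defaultdict


class MiniPubSub:
    """In-process analogue of Redis Pub/Sub, for illustration only."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # channel -> list of handlers

    def subscribe(self, channel, handler):
        # Redis: SUBSCRIBE channel (handler runs per delivered message)
        self._subscribers[channel].append(handler)

    def publish(self, channel, message):
        # Redis: PUBLISH channel message; returns the receiver count
        handlers = self._subscribers.get(channel, [])
        for handler in handlers:
            handler(message)
        return len(handlers)
```

A dashboard would subscribe to a per-session channel (e.g. the hypothetical `session:{session_id}:events`) and the chat loop would publish on each user turn.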
Best Practices
- Use appropriate Redis data structures (e.g., lists/hashes) for messages and state
- Configure proper TTLs to balance memory usage and retention
- Implement connection pooling and robust retry strategies
- Monitor memory usage and set eviction policies as needed
- Design and enforce consistent key schemas (session:, cache:, state:)
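The memory-related practices above translate into a short redis.conf fragment; the values here are illustrative starting points, not recommendations for every workload.

```conf
# Cap Redis memory so conversation data cannot grow unbounded
maxmemory 2gb
# Evict least-recently-used keys that carry a TTL first;
# allkeys-lru is an alternative if every key is safe to drop
maxmemory-policy volatile-lru
```

With `volatile-lru`, only keys given a TTL (sessions, cache entries) are eviction candidates, so durable state keys survive memory pressure.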
Example Use Cases
- A customer-support bot retains per-session chat history to provide context-aware responses
- LLM responses are cached to accelerate repeated questions like pricing or policies
- User profiles store cross-session state using state:{user_id}:{key} keys for continuity
- Redis Pub/Sub streams real-time updates to agent dashboards when users respond
- Inactive conversations are automatically pruned using TTL-based expiration