knowledge-base-cache
Install:

```
npx machina-cli add skill Dqz00116/skill-lib/knowledge-base-cache --openclaw
```

Knowledge Base Cache Skill
Create a structured knowledge repository with layered architecture (hot/cold/warm) and intelligent context management.
Architecture Overview
```
┌─────────────────────────────────────────────────────────────┐
│                     Application Layer                       │
│                        Agent Core                           │
└──────────────────────────┬──────────────────────────────────┘
                           │
┌──────────────────────────▼──────────────────────────────────┐
│                    Working Memory Layer                     │
│   • Context assembly        • Token budget management       │
│   • Multi-source routing    • LRU cache                     │
└─────────────┬───────────────────────────────────────────────┘
              │ Standard interface: KnowledgeSource
    ┌─────────┼─────────┐
    ▼         ▼         ▼ (reserved)
┌───────┐ ┌───────┐ ┌───────┐
│  Hot  │ │ Cold  │ │ Warm  │
│ Cache │ │Storage│ │Vector │
│ Layer │ │ Layer │ │ Layer │
└───┬───┘ └───┬───┘ └───┬───┘
    │         │         │
 Context  Repository  Vector DB
  Cache      Files     (Future)
```
Three-Tier Architecture
| Tier | Technology | Use Case | Status |
|---|---|---|---|
| 🔥 Hot | Context Cache (API) | Full document retrieval, 90% cost savings | ✅ Available |
| ❄️ Cold | Repository Files | Keyword search, browsing, discovery | ✅ Available |
| 🌡️ Warm | Vector DB | Semantic search, precise Q&A | 🔮 Planned |
What This Skill Does

1. Layered Knowledge Storage

   ```
   repository/
   ├── core/                       # Core components
   │   ├── __init__.py             # Standard interfaces
   │   └── working_memory.py       # Working Memory layer
   ├── adapters/                   # Layer adapters
   │   ├── __init__.py
   │   ├── hot_cache_adapter.py
   │   ├── cold_storage_adapter.py
   │   └── warm_cache_adapter.py   # (reserved)
   ├── index.json                  # Knowledge index
   ├── cache-state.json            # Cache status
   ├── skills/                     # Skill knowledge
   ├── docs/                       # Document knowledge
   └── scripts/
       ├── cache_manager.py        # Cache management
       └── cache_helper.py         # Helper utilities
   ```

2. Working Memory Layer
   - Unified interface for all knowledge sources
   - Automatic context assembly with token budgeting
   - LRU cache for repeated queries
   - Cross-tier result ranking

3. Context Caching (Hot Layer)
   - Full document caching via API
   - 90% cost reduction
   - 83% latency improvement

4. File-Based Storage (Cold Layer)
   - Keyword-based retrieval
   - Excerpt generation
   - No API costs

5. Auto-Refresh
   - Configures a cron job for daily refresh
   - Keeps caches fresh without manual intervention
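Cross-tier result ranking under a token budget can be sketched as follows. This is a minimal illustration of the technique, not the skill's actual implementation; the `score`/`tokens`/`text` result fields are assumptions made for the example.

```python
# Minimal sketch of cross-tier result ranking with a token budget.
# The result shape (score/tokens/text) is an assumption for illustration.

def assemble_context(hot_results, cold_results, token_budget):
    """Merge results from the hot and cold layers, rank by score,
    and keep the best results that fit within the token budget."""
    merged = sorted(hot_results + cold_results,
                    key=lambda r: r["score"], reverse=True)
    selected, used = [], 0
    for result in merged:
        if used + result["tokens"] <= token_budget:
            selected.append(result)
            used += result["tokens"]
    return selected

hot = [{"score": 0.9, "tokens": 1200, "text": "full doc"}]
cold = [{"score": 0.7, "tokens": 400, "text": "excerpt A"},
        {"score": 0.5, "tokens": 600, "text": "excerpt B"}]

# The highest-scoring results are kept until the budget is exhausted.
print(assemble_context(hot, cold, token_budget=1700))
```

The real Working Memory layer performs this assembly automatically when you call `wm.query(...)`.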
Quick Start
Step 1: Initialize Repository
```shell
# The repository structure is already created
# If not, run:
python scripts/init_knowledge_base.py
```
Step 2: Add Knowledge
Add markdown files to appropriate directories:
- `repository/skills/` - Skill documentation
- `repository/docs/` - General documentation
- `repository/projects/` - Project-specific knowledge
Step 3: Build Cache
```shell
cd repository

# Initialize index
python scripts/cache_manager.py init

# Build hot cache (Context Caching)
python scripts/cache_manager.py build

# Test the system
python test_phase1.py
```
Step 4: Use in Your Agent
Modern Approach (Recommended):
```python
from repository.core.working_memory import WorkingMemoryManager

# Initialize once
wm = WorkingMemoryManager({
    'max_tokens': 6000,
    'allocation': {
        'system_prompt': 0.15,        # 15%
        'conversation': 0.25,         # 25%
        'retrieved_knowledge': 0.60   # 60%
    }
})

# Use in conversations
context = wm.query(
    user_query="How do I deploy?",
    system_prompt="You are an assistant...",
    conversation=history_messages
)
```
Legacy Approach:
```python
from scripts.cache_helper import get_cache_headers, load_knowledge_context

# Get cache headers for API calls
headers = get_cache_headers()

# Load knowledge context
context = load_knowledge_context()
```
Step 5: Configure Auto-Refresh
```shell
# Add a cron job for daily refresh
# Configure it in your agent's cron system
```
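For example, a crontab entry along these lines would refresh expired caches once a day (the repository path is a placeholder to adapt to your installation):

```
# Hypothetical crontab entry: refresh expired caches daily at 03:00.
# Replace /path/to/repository with your actual repository location.
0 3 * * * cd /path/to/repository && python scripts/cache_manager.py refresh
```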
Layer Details
🔥 Hot Cache Layer
Purpose: Store frequently accessed complete documents
When to Use:
- Reading full skill documentation
- API reference lookup
- Deployment guides
Implementation: adapters/hot_cache_adapter.py
```python
from adapters.hot_cache_adapter import HotCacheAdapter
from core import RetrievalQuery

hot = HotCacheAdapter()
result = hot.retrieve(RetrievalQuery(
    query="Docker deployment",
    context_budget=2000,
    top_k=3
))
```
❄️ Cold Storage Layer
Purpose: Keyword-based file retrieval with excerpt generation
When to Use:
- Browsing knowledge base
- Finding relevant files
- Low-cost retrieval
Implementation: adapters/cold_storage_adapter.py
```python
from adapters.cold_storage_adapter import ColdStorageAdapter
from core import RetrievalQuery

cold = ColdStorageAdapter()
result = cold.retrieve(RetrievalQuery(
    query="Docker deployment",
    context_budget=2000,
    top_k=5
))
```
🌡️ Warm Cache Layer (Planned)
Purpose: Semantic search with vector embeddings
When to Use:
- Precise Q&A
- Semantic similarity matching
- Large knowledge bases
Implementation: Reserved interface in adapters/warm_cache_adapter.py
Working Memory Configuration
Token Budget Allocation
Default allocation (customizable):
| Component | Percentage | Tokens (6K total) |
|---|---|---|
| System Prompt | 15% | 900 |
| Conversation | 25% | 1,500 |
| Retrieved Knowledge | 60% | 3,600 |
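The per-component budgets in the table are simple arithmetic over the allocation percentages; a quick sketch (the percentages match the documented defaults):

```python
# Compute per-component token budgets from the default allocation shares.
allocation = {
    "system_prompt": 0.15,
    "conversation": 0.25,
    "retrieved_knowledge": 0.60,
}
max_tokens = 6000

budgets = {name: round(max_tokens * share) for name, share in allocation.items()}
print(budgets)
# → {'system_prompt': 900, 'conversation': 1500, 'retrieved_knowledge': 3600}
```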
Configuration Options
```python
from repository.core.working_memory import WorkingMemoryManager
from repository.core import MemoryAllocation

wm = WorkingMemoryManager({
    'max_tokens': 8000,           # Total context window
    'lru_cache_size': 10,         # LRU cache size
    'allocation': {
        'system_prompt': 0.20,        # 20%
        'conversation': 0.20,         # 20%
        'retrieved_knowledge': 0.60   # 60%
    },
    'repo_path': 'repository'     # Repository path
})
```
Cache Management Commands
| Command | Description |
|---|---|
| `cache_manager.py init` | Scan repository and update index |
| `cache_manager.py build` | Create or update hot caches |
| `cache_manager.py status` | Show cache status |
| `cache_manager.py refresh` | Refresh expired caches |
| `cache_manager.py stats` | Show statistics |
Testing Commands
```shell
# Run Phase 1 integration tests
cd repository
python test_phase1.py

# Test individual layers
python -c "from adapters.hot_cache_adapter import HotCacheAdapter; print(HotCacheAdapter().get_stats())"
python -c "from adapters.cold_storage_adapter import ColdStorageAdapter; print(ColdStorageAdapter().get_stats())"
```
Cost Benefits
Hot Layer (Context Cache)
| Metric | Without Cache | With Cache | Savings |
|---|---|---|---|
| Cost per 1000 queries | ~¥150 | ~¥15 | 90% |
| First token latency | ~30s | ~5s | 83% |
| Monthly cost (daily 50 queries) | ~¥450 | ~¥45 | ¥405 |
Cold Layer (File Storage)
| Metric | Value |
|---|---|
| API Cost | ¥0 (no API calls) |
| Latency | ~10-50ms (local files) |
| Best For | Browsing, discovery, keyword search |
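Because the cold layer is just keyword matching over local files, its low latency and zero API cost follow directly. A simplified sketch of the technique (this illustrates keyword retrieval in general, not the adapter's actual code; the scoring and excerpt rules are assumptions):

```python
# Sketch: score documents by keyword overlap and return short excerpts.
def keyword_search(query, documents, top_k=2, excerpt_len=50):
    """Rank documents by how many query terms appear in them."""
    terms = set(query.lower().split())
    scored = []
    for name, text in documents.items():
        words = set(text.lower().split())
        score = len(terms & words)          # count of matching terms
        if score > 0:
            scored.append((score, name, text[:excerpt_len]))
    scored.sort(reverse=True)               # best matches first
    return scored[:top_k]

docs = {
    "deploy.md": "How to deploy with Docker and compose files",
    "api.md": "API reference for the cache manager",
}
print(keyword_search("docker deploy", docs))
```

A real adapter would also tokenize more carefully and generate excerpts around the matched terms rather than from the start of the file.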
Working Memory Layer
| Metric | Value |
|---|---|
| Context Assembly | Automatic |
| Token Budget | Enforced |
| Multi-Source | Hot + Cold (+ Warm in future) |
| LRU Cache | Reduces repeated queries |
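The LRU cache noted in the table can be sketched with `collections.OrderedDict` (a minimal illustration of the eviction policy, not the skill's implementation; the class and method names here are invented for the example):

```python
from collections import OrderedDict

class LRUQueryCache:
    """Keep the most recent query results, evicting the oldest first."""
    def __init__(self, max_size=10):
        self.max_size = max_size
        self._cache = OrderedDict()

    def get(self, query):
        if query not in self._cache:
            return None
        self._cache.move_to_end(query)       # mark as recently used
        return self._cache[query]

    def put(self, query, context):
        self._cache[query] = context
        self._cache.move_to_end(query)
        if len(self._cache) > self.max_size:
            self._cache.popitem(last=False)  # evict least recently used

cache = LRUQueryCache(max_size=2)
cache.put("how to deploy?", "deployment context")
cache.put("api usage?", "api context")
cache.get("how to deploy?")                   # refresh its recency
cache.put("pricing?", "pricing context")      # evicts "api usage?"
print(cache.get("api usage?"))  # → None
```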
Troubleshooting
Cache Not Working
```shell
# Check if caches are active
python scripts/cache_manager.py status

# Rebuild if needed
python scripts/cache_manager.py build

# Verify hot layer
python -c "from adapters.hot_cache_adapter import HotCacheAdapter; print(HotCacheAdapter().is_available())"
```
Working Memory Not Finding Knowledge
```python
# Debug: check registered sources
from repository.core.working_memory import WorkingMemoryManager
wm = WorkingMemoryManager()
print(wm.get_stats())

# Debug: test individual layers
from adapters.hot_cache_adapter import HotCacheAdapter
from adapters.cold_storage_adapter import ColdStorageAdapter
from core import RetrievalQuery

hot = HotCacheAdapter()
cold = ColdStorageAdapter()
query = RetrievalQuery(query="test", context_budget=2000)
print("Hot:", hot.retrieve(query))
print("Cold:", cold.retrieve(query))
```
API Key Issues
Ensure an API key is set in the environment or configuration for the hot layer; the cold layer works without API keys.
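A quick way to check the environment before relying on the hot layer; note that `API_KEY` here is a hypothetical placeholder, since the actual variable name depends on your API provider:

```python
import os

# Hypothetical check: replace "API_KEY" with the environment variable
# your API provider actually uses.
def hot_layer_configured(env_var="API_KEY"):
    """Return True if an API key is present for the hot layer."""
    return bool(os.environ.get(env_var))

if not hot_layer_configured():
    print("Hot layer unavailable; falling back to cold storage.")
```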
Path Issues
All paths in generated files are relative (workspace-relative) for portability.
Migration from v1
If you were using the old cache system:
- Old way still works: `cache_helper.py` functions are unchanged
- New way recommended: use `WorkingMemoryManager` for better control
- Same repository structure: no migration needed
References
- Context Caching documentation
- Component architecture design
Source
https://github.com/Dqz00116/skill-lib/blob/main/knowledge-base-cache/SKILL.md

Overview
Creates a structured knowledge repository with a three-tier architecture: hot, cold, and warm layers. It includes a Working Memory layer, automatic caching, keyword retrieval (with semantic retrieval planned), and intelligent context assembly. This approach reduces API costs and lets the knowledge base grow without inflating every prompt.
How This Skill Works
A Working Memory manager provides a unified interface for all sources, with token budgeting and an LRU cache for repeated queries. Queries are routed through the hot cache (full document context), cold storage (keyword search), and a reserved warm layer (vector DB for semantic search). An auto-refresh mechanism via a daily cron job keeps caches fresh, and cross-tier result ranking surfaces the most relevant results.
When to Use It
- Need fast, cost-efficient full-document retrieval with 90% cost savings and latency improvements (hot layer).
- Require keyword search, browsing, and discovery across a large document set (cold layer).
- Seek semantic search and precise Q&A using vector representations (warm layer, planned in the architecture).
- Scale the knowledge base while minimizing API calls and maintaining performance across tiers.
- Automate cache freshness and reduce manual maintenance with an auto-refresh cron job.
Quick Start
- Step 1: Initialize Repository - Ensure the repository structure exists, or create it by running the initialization script.
- Step 2: Add Knowledge - Place markdown files into repository/skills and repository/docs (and repository/projects for project knowledge).
- Step 3: Build Cache - Run cache_manager to init and build the hot cache, then run tests to validate the system.
Best Practices
- Design clear KnowledgeSource interfaces and standardize the repository/core structure for working_memory and adapters.
- Keep index.json and cache-state.json accurate and updated after builds and refreshes.
- Prioritize hot cache for frequently queried topics to maximize cost savings and speed.
- Tune token budgets in Working Memory and implement an effective LRU eviction policy.
- Automate cache refresh with cron jobs and routinely validate data integrity across tiers.
Example Use Cases
- AI assistant for enterprise docs using hot cache to retrieve full documents with ~90% cost reduction and ~83% latency improvement.
- Research assistant enabling keyword search and discovery across large paper repositories using the cold layer.
- Legal knowledge base leveraging keyword-based retrieval and excerpt generation in the cold layer for fast legal research.
- Product support bot employing semantic Q&A via planned warm vector DB to provide precise answers.
- Company-wide knowledge base with daily auto-refresh ensuring fresh, consistent information without manual intervention.