AgentDB Vector Search

npx machina-cli add skill Microck/ordinary-claude-skills/agentdb-vector-search --openclaw

What This Skill Does

Implements vector-based semantic search using AgentDB's high-performance vector database with 150x-12,500x faster operations than traditional solutions. Features HNSW indexing, quantization, and sub-millisecond search (<100µs).

Prerequisites

  • Node.js 18+
  • AgentDB v1.0.7+ (via agentic-flow or standalone)
  • OpenAI API key (for embeddings) or custom embedding model

Quick Start with CLI

Initialize Vector Database

# Initialize with default dimensions (1536 for OpenAI ada-002)
npx agentdb@latest init ./vectors.db

# Custom dimensions for different embedding models
npx agentdb@latest init ./vectors.db --dimension 768  # sentence-transformers
npx agentdb@latest init ./vectors.db --dimension 384  # all-MiniLM-L6-v2

# Use preset configurations
npx agentdb@latest init ./vectors.db --preset small   # <10K vectors
npx agentdb@latest init ./vectors.db --preset medium  # 10K-100K vectors
npx agentdb@latest init ./vectors.db --preset large   # >100K vectors

# In-memory database for testing
npx agentdb@latest init ./vectors.db --in-memory

Query Vector Database

# Basic similarity search
npx agentdb@latest query ./vectors.db "[0.1,0.2,0.3,...]"

# Top-k results
npx agentdb@latest query ./vectors.db "[0.1,0.2,0.3]" -k 10

# With similarity threshold (cosine similarity)
npx agentdb@latest query ./vectors.db "0.1 0.2 0.3" -t 0.75 -m cosine

# Different distance metrics
npx agentdb@latest query ./vectors.db "[...]" -m euclidean  # L2 distance
npx agentdb@latest query ./vectors.db "[...]" -m dot        # Dot product

# JSON output for automation
npx agentdb@latest query ./vectors.db "[...]" -f json -k 5

# Verbose output with distances
npx agentdb@latest query ./vectors.db "[...]" -v

Import/Export Vectors

# Export vectors to JSON
npx agentdb@latest export ./vectors.db ./backup.json

# Import vectors from JSON
npx agentdb@latest import ./backup.json

# Get database statistics
npx agentdb@latest stats ./vectors.db

Quick Start with API

import { createAgentDBAdapter, computeEmbedding } from 'agentic-flow/reasoningbank';

// Initialize with vector search optimizations
const adapter = await createAgentDBAdapter({
  dbPath: '.agentdb/vectors.db',
  enableLearning: false,       // Vector search only
  enableReasoning: true,       // Enable semantic matching
  quantizationType: 'binary',  // 32x memory reduction
  cacheSize: 1000,             // Fast retrieval
});

// Store document with embedding
const text = "The quantum computer achieved 100 qubits";
const embedding = await computeEmbedding(text);

await adapter.insertPattern({
  id: '',
  type: 'document',
  domain: 'technology',
  pattern_data: JSON.stringify({
    embedding,
    text,
    metadata: { category: "quantum", date: "2025-01-15" }
  }),
  confidence: 1.0,
  usage_count: 0,
  success_count: 0,
  created_at: Date.now(),
  last_used: Date.now(),
});

// Semantic search with MMR (Maximal Marginal Relevance)
const queryEmbedding = await computeEmbedding("quantum computing advances");
const results = await adapter.retrieveWithReasoning(queryEmbedding, {
  domain: 'technology',
  k: 10,
  useMMR: true,              // Diverse results
  synthesizeContext: true,    // Rich context
});

Core Features

1. Vector Storage

// Store with automatic embedding
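// db here is an initialized AgentDB handle; initialization is not shown in this snippet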
await db.storeWithEmbedding({
  content: "Your document text",
  metadata: { source: "docs", page: 42 }
});

2. Similarity Search

// Find similar documents
const similar = await db.findSimilar("quantum computing", {
  limit: 5,
  minScore: 0.75
});

3. Hybrid Search (Vector + Metadata)

// Combine vector similarity with metadata filtering
const results = await db.hybridSearch({
  query: "machine learning models",
  filters: {
    category: "research",
    date: { $gte: "2024-01-01" }
  },
  limit: 20
});

Advanced Usage

RAG (Retrieval Augmented Generation)

// Build RAG pipeline
async function ragQuery(question: string) {
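  // embed(), db, and llm are assumed to be initialized elsewhere (embedding client, AgentDB handle, LLM client)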
  // 1. Get relevant context
  const context = await db.searchSimilar(
    await embed(question),
    { limit: 5, threshold: 0.7 }
  );

  // 2. Generate answer with context
  const prompt = `Context: ${context.map(c => c.text).join('\n')}
Question: ${question}`;

  return await llm.generate(prompt);
}

Batch Operations

// Efficient batch storage
await db.batchStore(documents.map(doc => ({
  text: doc.content,
  embedding: doc.vector,
  metadata: doc.meta
})));

MCP Server Integration

# Start AgentDB MCP server for Claude Code
npx agentdb@latest mcp

# Add to Claude Code (one-time setup)
claude mcp add agentdb npx agentdb@latest mcp

# Now use MCP tools in Claude Code:
# - agentdb_query: Semantic vector search
# - agentdb_store: Store documents with embeddings
# - agentdb_stats: Database statistics

Performance Benchmarks

# Run comprehensive benchmarks
npx agentdb@latest benchmark

# Results:
# ✅ Pattern Search: 150x faster (100µs vs 15ms)
# ✅ Batch Insert: 500x faster (2ms vs 1s for 100 vectors)
# ✅ Large-scale Query: 12,500x faster (8ms vs 100s at 1M vectors)
# ✅ Memory Efficiency: 4-32x reduction with quantization

Quantization Options

AgentDB provides multiple quantization strategies for memory efficiency:

Binary Quantization (32x reduction)

const adapter = await createAgentDBAdapter({
  quantizationType: 'binary',  // 768-dim → 96 bytes
});

Scalar Quantization (4x reduction)

const adapter = await createAgentDBAdapter({
  quantizationType: 'scalar',  // 768-dim → 768 bytes
});

Product Quantization (8-16x reduction)

const adapter = await createAgentDBAdapter({
  quantizationType: 'product',  // 768-dim → 48-96 bytes
});

Distance Metrics

# Cosine similarity (default, best for most use cases)
npx agentdb@latest query ./db.sqlite "[...]" -m cosine

# Euclidean distance (L2 norm)
npx agentdb@latest query ./db.sqlite "[...]" -m euclidean

# Dot product (for normalized vectors)
npx agentdb@latest query ./db.sqlite "[...]" -m dot

Advanced Features

HNSW Indexing

  • O(log n) search complexity
  • Sub-millisecond retrieval (<100µs)
  • Automatic index building

Caching

  • 1000-pattern in-memory cache
  • <1ms pattern retrieval
  • Automatic cache invalidation

MMR (Maximal Marginal Relevance)

  • Diverse result sets
  • Avoid redundancy
  • Balance relevance and diversity (see the sketch below)
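
The caching and MMR options above correspond to adapter settings already shown in the Quick Start API. A minimal sketch, assuming the same agentic-flow adapter:

import { createAgentDBAdapter, computeEmbedding } from 'agentic-flow/reasoningbank';

// Keep up to 1000 patterns in an in-memory cache for <1ms repeat retrievals
const adapter = await createAgentDBAdapter({
  dbPath: '.agentdb/vectors.db',
  enableReasoning: true,
  cacheSize: 1000,
});

// MMR re-ranks the top candidates to balance relevance against redundancy
const queryEmbedding = await computeEmbedding("vector index tuning");
const diverse = await adapter.retrieveWithReasoning(queryEmbedding, {
  k: 10,
  useMMR: true,
});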

Performance Tips

  1. Enable HNSW indexing: Automatic with AgentDB, 10-100x faster
  2. Use quantization: Binary (32x), Scalar (4x), Product (8-16x) memory reduction
  3. Batch operations: 500x faster for bulk inserts
  4. Match dimensions: 1536 (OpenAI), 768 (sentence-transformers), 384 (MiniLM)
  5. Similarity threshold: Start at 0.7 for quality, adjust based on use case
  6. Enable caching: 1000-pattern cache for frequent queries (see the sketch below)
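
A sketch combining tips 2, 5, and 6, reusing the adapter options from the Quick Start and the db.findSimilar helper from Core Features (both handles are assumed to point at the same database):

import { createAgentDBAdapter } from 'agentic-flow/reasoningbank';

// Tips 2 and 6: binary quantization plus a 1000-pattern cache at initialization
const adapter = await createAgentDBAdapter({
  dbPath: '.agentdb/vectors.db',
  quantizationType: 'binary',
  cacheSize: 1000,
});

// Tip 5: start with a 0.7 similarity floor and tune it for your corpus
const hits = await db.findSimilar("vector index tuning", {
  limit: 10,
  minScore: 0.7,
});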

Troubleshooting

Issue: Slow search performance

# Check if HNSW indexing is enabled (automatic)
npx agentdb@latest stats ./vectors.db

# Expected: <100µs search time

Issue: High memory usage

# Enable binary quantization (32x reduction)
# Use in adapter: quantizationType: 'binary'

Issue: Poor relevance

# Adjust similarity threshold
npx agentdb@latest query ./db.sqlite "[...]" -t 0.8  # Higher threshold

# Or use MMR for diverse results
# Use in adapter: useMMR: true

Issue: Wrong dimensions

# Check embedding model dimensions:
# - OpenAI ada-002: 1536
# - sentence-transformers: 768
# - all-MiniLM-L6-v2: 384

npx agentdb@latest init ./db.sqlite --dimension 768

Database Statistics

# Get comprehensive stats
npx agentdb@latest stats ./vectors.db

# Shows:
# - Total patterns/vectors
# - Database size
# - Average confidence
# - Domains distribution
# - Index status

Performance Characteristics

  • Vector Search: <100µs (HNSW indexing)
  • Pattern Retrieval: <1ms (with cache)
  • Batch Insert: 2ms for 100 vectors
  • Memory Efficiency: 4-32x reduction with quantization
  • Scalability: Handles 1M+ vectors efficiently
  • Latency: Sub-millisecond for most operations

Learn More

Source

git clone https://github.com/Microck/ordinary-claude-skills
# Skill file: skills_all/agentdb-vector-search/SKILL.md

View on GitHub: https://github.com/Microck/ordinary-claude-skills/blob/main/skills_all/agentdb-vector-search/SKILL.md

Overview

AgentDB Vector Search enables semantic, vector-based document retrieval using AgentDB’s high-performance vector database. It accelerates intelligent searching, similarity matching, and context-aware querying, making it ideal for building RAG systems, semantic search engines, and knowledge bases.

How This Skill Works

Embeddings (from OpenAI or a custom embedding model) represent documents as vectors stored in AgentDB. The system uses HNSW indexing with quantization to achieve sub-millisecond search (<100µs) and supports tuning via presets. Queries are embedded and matched against the index, with optional maximal marginal relevance (MMR) and context synthesis, and hybrid vector+metadata search for refined results.
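
Concretely, the store-and-retrieve flow looks like the sketch below, reusing the Quick Start API (the document and query strings are placeholders):

import { createAgentDBAdapter, computeEmbedding } from 'agentic-flow/reasoningbank';

// 1. Open the vector store
const adapter = await createAgentDBAdapter({
  dbPath: '.agentdb/vectors.db',
  enableReasoning: true,
});

// 2. Embed a document and persist it as a pattern
const text = "HNSW graphs trade index build time for logarithmic query cost";
await adapter.insertPattern({
  id: '',
  type: 'document',
  domain: 'search',
  pattern_data: JSON.stringify({ embedding: await computeEmbedding(text), text, metadata: {} }),
  confidence: 1.0,
  usage_count: 0,
  success_count: 0,
  created_at: Date.now(),
  last_used: Date.now(),
});

// 3. Embed the query and retrieve with MMR diversification
const results = await adapter.retrieveWithReasoning(
  await computeEmbedding("how does HNSW keep queries fast?"),
  { domain: 'search', k: 5, useMMR: true },
);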

When to Use It

  • Building Retrieval-Augmented Generation (RAG) systems requiring fast, accurate document retrieval
  • Implementing semantic search engines with dense vector representations
  • Powering intelligent knowledge bases that need context-aware querying
  • Scaling large document repositories where traditional search is too slow
  • Prototyping or testing with in-memory or preset configurations before full deployment

Quick Start

  1. Initialize the vector database with the CLI, e.g. npx agentdb@latest init ./vectors.db, setting dimensions or a preset (default 1536 for ada-002, 768/384 for other models; presets small/medium/large; --in-memory for testing)
  2. Store documents with embeddings via the API: computeEmbedding to embed, insertPattern to persist the embedding, text, and metadata
  3. Run semantic search with optional MMR and context synthesis: compute the query embedding and call adapter.retrieveWithReasoning (or equivalent) to retrieve and synthesize results

Best Practices

  • Use a stable embedding model and keep the vector dimensionality consistent across storage and querying (a dimension-check sketch follows this list)
  • Choose an appropriate preset (small/medium/large) based on dataset size to balance memory and speed
  • Enable MMR for diverse results and synthesizeContext for richer retrieved narratives when needed
  • Leverage hybrid search by indexing metadata to improve relevance with domain-specific fields
  • Batch inserts and monitor latency; refresh embeddings as documents are updated to maintain accuracy
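
For the first practice, a small guard can catch dimension mismatches before they reach the index; assertDimension below is a hypothetical helper, not part of AgentDB:

import { computeEmbedding } from 'agentic-flow/reasoningbank';

// Hypothetical guard: reject embeddings whose length does not match the index dimension
const EXPECTED_DIMENSION = 1536; // must equal the --dimension used at init

function assertDimension(embedding: number[]): number[] {
  if (embedding.length !== EXPECTED_DIMENSION) {
    throw new Error(
      `embedding has ${embedding.length} dimensions, expected ${EXPECTED_DIMENSION}; ` +
      `use the same embedding model for storage and querying`
    );
  }
  return embedding;
}

const safeEmbedding = assertDimension(await computeEmbedding("some document text"));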

Example Use Cases

  • Internal enterprise knowledge base search that surfaces relevant policy and procedure docs
  • Semantic repository for academic papers with dense ranking by relevance
  • Customer support Q&A system that returns context-rich answer snippets
  • Legal and compliance document discovery with similarity-based retrieval
  • Product manual search within a vendor portal to surface relevant instructions and specs
