npx machina-cli add skill Orchestra-Research/AI-Research-SKILLs/pinecone --openclaw

Pinecone - Managed Vector Database

The vector database for production AI applications.

When to use Pinecone

Use when:

  • Need managed, serverless vector database
  • Production RAG applications
  • Auto-scaling required
  • Low latency critical (<100ms)
  • Don't want to manage infrastructure
  • Need hybrid search (dense + sparse vectors)

Metrics:

  • Fully managed SaaS
  • Auto-scales to billions of vectors
  • p95 latency <100ms
  • 99.9% uptime SLA

Use alternatives instead:

  • Chroma: Self-hosted, open-source
  • FAISS: Offline, pure similarity search
  • Weaviate: Self-hosted with more features

Quick start

Installation

pip install pinecone  # the package was renamed from "pinecone-client"

Basic usage

from pinecone import Pinecone, ServerlessSpec

# Initialize
pc = Pinecone(api_key="your-api-key")

# Create index
pc.create_index(
    name="my-index",
    dimension=1536,  # Must match embedding dimension
    metric="cosine",  # or "euclidean", "dotproduct"
    spec=ServerlessSpec(cloud="aws", region="us-east-1")
)

# Connect to index
index = pc.Index("my-index")

# Upsert vectors
index.upsert(vectors=[
    {"id": "vec1", "values": [0.1, 0.2, ...], "metadata": {"category": "A"}},
    {"id": "vec2", "values": [0.3, 0.4, ...], "metadata": {"category": "B"}}
])

# Query
results = index.query(
    vector=[0.1, 0.2, ...],
    top_k=5,
    include_metadata=True
)

print(results["matches"])

Core operations

Create index

# Serverless (recommended)
pc.create_index(
    name="my-index",
    dimension=1536,
    metric="cosine",
    spec=ServerlessSpec(
        cloud="aws",         # or "gcp", "azure"
        region="us-east-1"
    )
)

# Pod-based (for consistent performance)
from pinecone import PodSpec

pc.create_index(
    name="my-index",
    dimension=1536,
    metric="cosine",
    spec=PodSpec(
        environment="us-east1-gcp",
        pod_type="p1.x1"
    )
)

Upsert vectors

# Single upsert
index.upsert(vectors=[
    {
        "id": "doc1",
        "values": [0.1, 0.2, ...],  # 1536 dimensions
        "metadata": {
            "text": "Document content",
            "category": "tutorial",
            "timestamp": "2025-01-01"
        }
    }
])

# Batch upsert (recommended)
vectors = [
    {"id": f"vec{i}", "values": embedding, "metadata": metadata}
    for i, (embedding, metadata) in enumerate(zip(embeddings, metadatas))
]

index.upsert(vectors=vectors, batch_size=100)

Query vectors

# Basic query
results = index.query(
    vector=[0.1, 0.2, ...],
    top_k=10,
    include_metadata=True,
    include_values=False
)

# With metadata filtering
results = index.query(
    vector=[0.1, 0.2, ...],
    top_k=5,
    filter={"category": {"$eq": "tutorial"}}
)

# Namespace query
results = index.query(
    vector=[0.1, 0.2, ...],
    top_k=5,
    namespace="production"
)

# Access results
for match in results["matches"]:
    print(f"ID: {match['id']}")
    print(f"Score: {match['score']}")
    print(f"Metadata: {match['metadata']}")

Metadata filtering

# Exact match
filter = {"category": "tutorial"}

# Comparison
filter = {"price": {"$gte": 100}}  # $gt, $gte, $lt, $lte, $ne

# Logical operators
filter = {
    "$and": [
        {"category": "tutorial"},
        {"difficulty": {"$lte": 3}}
    ]
}  # Also: $or

# In operator
filter = {"tags": {"$in": ["python", "ml"]}}

Namespaces

# Partition data by namespace
index.upsert(
    vectors=[{"id": "vec1", "values": [...]}],
    namespace="user-123"
)

# Query specific namespace
results = index.query(
    vector=[...],
    namespace="user-123",
    top_k=5
)

# List namespaces
stats = index.describe_index_stats()
print(stats['namespaces'])

Hybrid search (dense + sparse)

# Upsert with sparse vectors
index.upsert(vectors=[
    {
        "id": "doc1",
        "values": [0.1, 0.2, ...],  # Dense vector
        "sparse_values": {
            "indices": [10, 45, 123],  # Token IDs
            "values": [0.5, 0.3, 0.8]   # TF-IDF scores
        },
        "metadata": {"text": "..."}
    }
])

# Hybrid query (the query API takes the dense and sparse vectors
# as given; apply any dense/sparse weighting client-side first)
results = index.query(
    vector=[0.1, 0.2, ...],
    sparse_vector={
        "indices": [10, 45],
        "values": [0.5, 0.3]
    },
    top_k=5
)
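
A common pattern for tuning the dense/sparse balance is a client-side convex combination applied to both vectors before querying. A minimal sketch; the alpha convention here (0 = sparse only, 1 = dense only) is this helper's own assumption, not part of the Pinecone API:

```python
def hybrid_scale(dense, sparse, alpha):
    """Scale dense and sparse vectors by a convex weighting.

    alpha=1.0 -> dense only, alpha=0.0 -> sparse only.
    `sparse` is a dict with "indices" and "values" keys, the shape
    Pinecone's upsert/query APIs expect for sparse vectors.
    """
    if not 0 <= alpha <= 1:
        raise ValueError("alpha must be between 0 and 1")
    scaled_dense = [v * alpha for v in dense]
    scaled_sparse = {
        "indices": sparse["indices"],
        "values": [v * (1 - alpha) for v in sparse["values"]],
    }
    return scaled_dense, scaled_sparse

# Usage sketch: weight the query vectors, then pass them to index.query
dense_q, sparse_q = hybrid_scale(
    [0.1, 0.2], {"indices": [10, 45], "values": [0.5, 0.3]}, alpha=0.5
)
```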

LangChain integration

from langchain_pinecone import PineconeVectorStore
from langchain_openai import OpenAIEmbeddings

# Create vector store
vectorstore = PineconeVectorStore.from_documents(
    documents=docs,
    embedding=OpenAIEmbeddings(),
    index_name="my-index"
)

# Query
results = vectorstore.similarity_search("query", k=5)

# With metadata filter
results = vectorstore.similarity_search(
    "query",
    k=5,
    filter={"category": "tutorial"}
)

# As retriever
retriever = vectorstore.as_retriever(search_kwargs={"k": 10})

LlamaIndex integration

from llama_index.vector_stores.pinecone import PineconeVectorStore

# Connect to Pinecone
pc = Pinecone(api_key="your-key")
pinecone_index = pc.Index("my-index")

# Create vector store
vector_store = PineconeVectorStore(pinecone_index=pinecone_index)

# Use in LlamaIndex
from llama_index.core import StorageContext, VectorStoreIndex

storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

Index management

# List indices
indexes = pc.list_indexes()

# Describe index
index_info = pc.describe_index("my-index")
print(index_info)

# Get index stats
stats = index.describe_index_stats()
print(f"Total vectors: {stats['total_vector_count']}")
print(f"Namespaces: {stats['namespaces']}")

# Delete index
pc.delete_index("my-index")

Delete vectors

# Delete by ID
index.delete(ids=["vec1", "vec2"])

# Delete by metadata filter (pod-based indexes only;
# not supported on serverless indexes)
index.delete(filter={"category": "old"})

# Delete all in namespace
index.delete(delete_all=True, namespace="test")

# Delete all vectors in the default namespace
# (to remove the index itself, use pc.delete_index)
index.delete(delete_all=True)

Best practices

  1. Use serverless - Auto-scaling, cost-effective
  2. Batch upserts - More efficient (100-200 per batch)
  3. Add metadata - Enable filtering
  4. Use namespaces - Isolate data by user/tenant
  5. Monitor usage - Check Pinecone dashboard
  6. Optimize filters - Index frequently filtered fields
  7. Test with free tier - 1 index, 100K vectors free
  8. Use hybrid search - Combines semantic and keyword matching for better retrieval quality
  9. Set appropriate dimensions - Match embedding model
  10. Regular backups - Export important data
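
Practice 2 can be sketched as a small chunking helper for pipelines where you assemble batches yourself (the `chunked` helper and the batch size of 100 are illustrative; the official client also accepts a `batch_size` argument on `upsert` directly):

```python
from itertools import islice

def chunked(iterable, size=100):
    """Yield successive lists of at most `size` items."""
    it = iter(iterable)
    while batch := list(islice(it, size)):
        yield batch

# Usage sketch: upsert in batches of 100
# for batch in chunked(vectors, 100):
#     index.upsert(vectors=batch)
```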

Performance

Operation          Latency      Notes
Upsert             ~50-100ms    Per batch
Query (p50)        ~50ms        Depends on index size
Query (p95)        ~100ms       SLA target
Metadata filter    +10-20ms     Additional overhead

Pricing (as of 2025)

Serverless:

  • $0.096 per million read units
  • $0.06 per million write units
  • $0.06 per GB storage/month
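
Using the listed rates, a rough monthly estimate can be computed directly. The traffic figures below are hypothetical, and a real bill depends on how queries and upserts map to read/write units:

```python
# Illustrative monthly cost at the listed serverless rates
READ_PER_M = 0.096     # $ per million read units
WRITE_PER_M = 0.06     # $ per million write units
STORAGE_PER_GB = 0.06  # $ per GB-month

reads_m, writes_m, storage_gb = 50, 10, 20  # hypothetical workload

cost = reads_m * READ_PER_M + writes_m * WRITE_PER_M + storage_gb * STORAGE_PER_GB
print(f"~${cost:.2f}/month")  # ~$6.60/month
```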

Free tier:

  • 1 serverless index
  • 100K vectors (1536 dimensions)
  • Great for prototyping

Resources

Source

View on GitHub: https://github.com/Orchestra-Research/AI-Research-SKILLs/blob/main/15-rag/pinecone/SKILL.md

Overview

Pinecone is a fully managed, auto-scaling vector database designed for production AI applications. It offers hybrid search (dense + sparse), metadata filtering, and namespaces, delivering low latency p95 under 100ms. Ideal for production RAG, recommendations, or semantic search at scale with serverless infrastructure.

How This Skill Works

Install the pinecone Python client (formerly published as pinecone-client), initialize a Pinecone instance, and create an index with a defined dimension and metric. Upsert vectors with optional metadata, then query to retrieve the top-k nearest matches, applying metadata filters and namespaces as needed. Pinecone manages the underlying infrastructure, auto-scaling to billions of vectors while keeping query latency low.
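
The metric chosen at index creation determines how match scores are computed. For metric="cosine", this is what Pinecone computes server-side, sketched in pure Python purely for illustration (you never compute it yourself when using the service):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: 1.0 = same direction, 0.0 = orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```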

When to Use It

  • When you need a managed, serverless vector database for production AI applications.
  • For production RAG workflows requiring low p95 latency (<100ms).
  • When auto-scaling to billions of vectors is essential.
  • When you need hybrid search (dense + sparse) and metadata filtering.
  • When building semantic search or recommendations at scale with serverless infrastructure.

Quick Start

  1. Install the Pinecone client: pip install pinecone (formerly pinecone-client)
  2. Initialize and create a serverless index using a ServerlessSpec, e.g., cloud='aws', region='us-east-1', dimension=1536, metric='cosine'
  3. Upsert vectors and run queries, using batch upserts and optional metadata filtering or namespaces (e.g., namespace='production')

Best Practices

  • Prefer the ServerlessSpec (serverless deployment) for production environments to minimize ops.
  • Upsert vectors in batches (e.g., batch_size ~100) to maximize throughput and efficiency.
  • Ensure your embedding dimension matches the index dimension exactly.
  • Use metadata filtering and include_metadata to refine results efficiently.
  • Organize environments with namespaces (e.g., production) to isolate data and access.
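
The dimension-matching practice above can be enforced with a small guard before upserting. A hypothetical helper; the expected dimension must match whatever your index was created with:

```python
def check_dimensions(vectors, expected_dim=1536):
    """Raise early if any vector's length differs from the index dimension."""
    for v in vectors:
        actual = len(v["values"])
        if actual != expected_dim:
            raise ValueError(
                f"vector {v['id']!r} has dimension {actual}, expected {expected_dim}"
            )
    return vectors

# Usage sketch: index.upsert(vectors=check_dimensions(vectors))
```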

Example Use Cases

  • Production RAG chatbot for customer support using dense embeddings and metadata filters.
  • E-commerce product recommendations at scale with fast similarity search.
  • Internal semantic search across manuals and docs with namespace isolation.
  • Multitenant applications leveraging namespaces to separate customer data.
  • Low-latency content-based retrieval in media workflows.

Related Skills

modal-serverless-gpu

Orchestra-Research/AI-Research-SKILLs

Serverless GPU cloud platform for running ML workloads. Use when you need on-demand GPU access without infrastructure management, deploying ML models as APIs, or running batch jobs with automatic scaling.

llamaindex

Orchestra-Research/AI-Research-SKILLs

Data framework for building LLM applications with RAG. Specializes in document ingestion (300+ connectors), indexing, and querying. Features vector indices, query engines, agents, and multi-modal support. Use for document Q&A, chatbots, knowledge retrieval, or building RAG pipelines. Best for data-centric LLM applications.

faiss

Orchestra-Research/AI-Research-SKILLs

Facebook's library for efficient similarity search and clustering of dense vectors. Supports billions of vectors, GPU acceleration, and various index types (Flat, IVF, HNSW). Use for fast k-NN search, large-scale vector retrieval, or when you need pure similarity search without metadata. Best for high-performance applications.

crewai-multi-agent

Orchestra-Research/AI-Research-SKILLs

Multi-agent orchestration framework for autonomous AI collaboration. Use when building teams of specialized agents working together on complex tasks, when you need role-based agent collaboration with memory, or for production workflows requiring sequential/hierarchical execution. Built without LangChain dependencies for lean, fast execution.

langchain

Orchestra-Research/AI-Research-SKILLs

Framework for building LLM-powered applications with agents, chains, and RAG. Supports multiple providers (OpenAI, Anthropic, Google), 500+ integrations, ReAct agents, tool calling, memory management, and vector store retrieval. Use for building chatbots, question-answering systems, autonomous agents, or RAG applications. Best for rapid prototyping and production deployments.

langsmith-observability

Orchestra-Research/AI-Research-SKILLs

LLM observability platform for tracing, evaluation, and monitoring. Use when debugging LLM applications, evaluating model outputs against datasets, monitoring production systems, or building systematic testing pipelines for AI applications.
