
llamaindex-agent

npx machina-cli add skill a5c-ai/babysitter/llamaindex-agent --openclaw
Files (1)
SKILL.md
1.2 KB

LlamaIndex Agent Skill

Capabilities

  • Set up LlamaIndex query engines
  • Configure ReAct agents with tools
  • Implement OpenAI function calling agents
  • Design sub-question query engines
  • Set up multi-document agents
  • Implement chat engines with memory

Target Processes

  • rag-pipeline-implementation
  • knowledge-base-qa

Implementation Details

Agent Types

  1. ReActAgent: Reasoning and acting agent
  2. OpenAIAgent: Function calling agent
  3. StructuredPlannerAgent: Plan-and-execute style
  4. SubQuestionQueryEngine: Complex query decomposition

Query Engine Types

  • VectorStoreIndex query engine
  • Summary index query engine
  • Knowledge graph query engine
  • SQL query engine

Configuration Options

  • LLM selection
  • Tool definitions
  • Memory configuration
  • Verbose/debug settings
  • Query transform modules

Best Practices

  • Appropriate index selection
  • Clear tool descriptions
  • Memory for multi-turn
  • Monitor query performance

Dependencies

  • llama-index
  • llama-index-agent-openai

Source

https://github.com/a5c-ai/babysitter/blob/main/plugins/babysitter/skills/babysit/process/specializations/ai-agents-conversational/skills/llamaindex-agent/SKILL.md

Overview

Sets up LlamaIndex query engines and agents for retrieval-augmented generation (RAG). It supports ReAct agents, OpenAI function-calling agents, sub-question query engines, multi-document agents, and chat engines with memory for knowledge-base QA.

How This Skill Works

Configure LlamaIndex with an LLM, tool definitions, memory, and optional query transforms. Instantiate an agent type such as ReActAgent or OpenAIAgent wired to a query engine (for example, over a VectorStoreIndex or knowledge graph), then run the RAG pipeline for knowledge-base QA.

When to Use It

  • Building a RAG-powered QA system over a knowledge base
  • Setting up agents with tools for ReAct and function calling
  • Aggregating content from multiple documents with multi-document agents
  • Creating chat engines with memory for long-running conversations
  • Designing sub-question query engines for complex queries

Quick Start

  1. Install dependencies, including llama-index and llama-index-agent-openai
  2. Pick an agent type (ReActAgent or OpenAIAgent) and define its tools
  3. Configure a query engine (VectorStoreIndex, summary, knowledge graph, or SQL) with memory and run the rag-pipeline-implementation process
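Step 1 above maps to a single install of the dependencies listed for this skill:

```shell
pip install llama-index llama-index-agent-openai
```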

Best Practices

  • Choose an appropriate index (VectorStoreIndex, summary index, knowledge graph, or SQL) based on your data and latency requirements
  • Provide clear, descriptive tool definitions so the agent can match tools to user intent
  • Enable memory for multi-turn conversations to maintain context
  • Monitor and profile query performance to ensure responsiveness
  • Tune configuration options including LLM, tools, memory, and query transforms during development and deployment

Example Use Cases

  • A customer-support bot using a VectorStoreIndex over product docs to answer FAQs
  • A RAG agent that decomposes complex queries with SubQuestionQueryEngine and returns structured results
  • An OpenAI function-calling agent that automates tasks such as ticket creation via tools
  • A multi-document knowledge-base QA across PDFs and web docs using a Knowledge Graph engine
  • A chat-enabled assistant with memory for ongoing advisor-style conversations

