
pydanticai-docs

npx machina-cli add skill DougTrajano/pydantic-ai-skills/pydanticai-docs --openclaw

Pydantic AI Documentation Skill

What is Pydantic AI?

Pydantic AI is a production-grade Python agent framework for building type-safe, dependency-injected Generative AI applications. It supports multiple LLM providers, structured outputs via Pydantic models, and composable multi-agent patterns.

Doc: https://ai.pydantic.dev/index.md


Core Concepts

1. Agent Instantiation

from pydantic_ai import Agent

agent = Agent(
    'openai:gpt-4o',          # model string: provider:model-name
    system_prompt='Be helpful.',
)
result = agent.run_sync('What is the capital of France?')
print(result.output)

For full constructor parameters, run methods, and streaming: load references/AGENT.md.

2. Function Tools (@agent.tool)

from pydantic_ai import Agent, RunContext

agent = Agent('openai:gpt-4o', deps_type=str)

@agent.tool
def get_user_name(ctx: RunContext[str]) -> str:
    """Return the current user's name."""
    return ctx.deps

result = agent.run_sync('What is my name?', deps='Alice')

Use @agent.tool_plain when you don't need RunContext. For tool registration, return types, and retries: load references/FUNCTION_TOOLS.md.

3. Dependency Injection (RunContext)

from dataclasses import dataclass
from pydantic_ai import Agent, RunContext

@dataclass
class MyDeps:
    api_key: str
    user_id: int

agent = Agent('openai:gpt-4o', deps_type=MyDeps)

@agent.tool
async def fetch_data(ctx: RunContext[MyDeps]) -> str:
    return f'User {ctx.deps.user_id}'

For RunContext fields, injection into system prompts and output validators: load references/DEPENDENCIES.md.

4. Structured Output

from pydantic import BaseModel
from pydantic_ai import Agent

class CityInfo(BaseModel):
    city: str
    country: str

agent = Agent('openai:gpt-4o', output_type=CityInfo)
result = agent.run_sync('Where were the 2012 Olympics held?')
print(result.output)  # CityInfo(city='London', country='United Kingdom')

For union types, plain scalars, output_validator, and partial validation: load references/OUTPUT.md.


Additional Topics

For these topics, load the named reference file or follow the doc link — no implementation code is provided here.

| Topic | Reference file | Doc link |
| --- | --- | --- |
| Message history / multi-turn conversations | references/MESSAGES.md | https://ai.pydantic.dev/message-history/index.md |
| Model / provider setup (all providers) | references/MODELS.md | https://ai.pydantic.dev/models/overview/index.md |
| Toolsets (FunctionToolset, composition) | references/TOOLS_AND_TOOLSETS.md | https://ai.pydantic.dev/toolsets/index.md |
| MCP server integration | references/MCP.md | https://ai.pydantic.dev/mcp/client/index.md |
| Multi-agent applications | doc link only | https://ai.pydantic.dev/multi-agent-applications/index.md |
| Graphs (pydantic-graph) | doc link only | https://ai.pydantic.dev/graph/index.md |
| Evals (pydantic-evals) | doc link only | https://ai.pydantic.dev/evals/index.md |
| Durable execution | doc link only | https://ai.pydantic.dev/durable_execution/overview/index.md |
| Retries | doc link only | https://ai.pydantic.dev/retries/index.md |
| Testing (TestModel, override) | doc link only | https://ai.pydantic.dev/testing/index.md |
| Logfire integration | doc link only | https://ai.pydantic.dev/logfire/index.md |
| Builtin tools | doc link only | https://ai.pydantic.dev/builtin-tools/index.md |
| Streaming | doc link only | https://ai.pydantic.dev/agent/index.md |

Agent Behavior Rules

  1. Default to this file — answer from core concepts first; load only the specific references/<CONCEPT>.md relevant to the user's question when more depth is needed.
  2. Never fabricate API details — always end with "For details, see: <URL>" using a link from the official index above.
  3. No implementation code for non-core topics — return a doc link only for topics listed in the Additional Topics table.
  4. Prefer specificity — route to the most specific page (e.g., models/anthropic/index.md) when the user's question targets a specific provider, not the overview.
  5. Out of scope — do not debug user code passively, do not generate full production agent implementations, do not answer questions unrelated to the Pydantic AI ecosystem.

Source

git clone https://github.com/DougTrajano/pydantic-ai-skills

View on GitHub: https://github.com/DougTrajano/pydantic-ai-skills/blob/main/examples/skills/pydanticai-docs/SKILL.md

Overview

This skill provides practical guidance for using the Pydantic AI framework to build AI agents, define structured outputs with Pydantic models, wire tools and function calling, configure model providers, and debug runs. It covers core concepts like Agent instantiation, RunContext-based dependency injection, and structured outputs via output models. It also addresses adjacent tasks such as returning JSON, multi-step agents, and validating LLM output with Pydantic.

How This Skill Works

You instantiate an Agent with a provider:model string, register tools using @agent.tool, and inject dependencies via RunContext when needed. Structured outputs are defined with Pydantic BaseModel and passed to the agent via output_type. The skill points to relevant reference files for detailed implementation and examples on tools, dependencies, and model/provider setup.

When to Use It

  • When building an AI agent with Pydantic AI, including provider setup and multi-agent patterns
  • When you need structured outputs defined by Pydantic models
  • When wiring up function tools and RunContext-based dependencies inside an agent
  • When configuring model providers (OpenAI, Anthropic, Gemini, etc.) and handling streaming responses
  • When debugging agent runs or validating LLM output with Pydantic

Quick Start

  1. Install the package (pip install pydantic-ai) and import: from pydantic_ai import Agent
  2. Define a Pydantic BaseModel for outputs and create an Agent with output_type=YourModel
  3. Optionally register a tool with @agent.tool and run a query using run_sync

Best Practices

  • Define clear Pydantic models for all outputs to enable strict validation
  • Use RunContext and a deps_type to cleanly manage dependencies
  • Register tools with @agent.tool (use @agent.tool_plain when RunContext isn’t needed)
  • Test agent runs with small examples and enable streaming to observe partial results
  • Consult the referenced topic files (models, tools, dependencies, outputs) when extending functionality

Example Use Cases

  • Create a CityInfo BaseModel and instantiate an Agent with output_type=CityInfo to obtain structured results
  • Register a get_user_name tool with @agent.tool and read ctx.deps from RunContext to access dependencies
  • Inject dependencies via a dataclass and RunContext to fetch data inside a tool
  • Configure multiple providers and observe streaming responses from the agent
  • Debug an agent run by validating outputs against the Pydantic model and adjusting validators

