
pydantic-ai-agent-creation

npx machina-cli add skill existential-birds/beagle/pydantic-ai-agent-creation --openclaw

Creating PydanticAI Agents

Quick Start

from pydantic_ai import Agent

# Minimal agent (text output)
agent = Agent('openai:gpt-4o')
result = agent.run_sync('Hello!')
print(result.output)  # str

Model Selection

Model strings follow the provider:model-name format:

# OpenAI
agent = Agent('openai:gpt-4o')
agent = Agent('openai:gpt-4o-mini')

# Anthropic
agent = Agent('anthropic:claude-sonnet-4-5')
agent = Agent('anthropic:claude-haiku-4-5')

# Google
agent = Agent('google-gla:gemini-2.0-flash')
agent = Agent('google-vertex:gemini-2.0-flash')

# Others: groq:, mistral:, cohere:, bedrock:, etc.

Structured Outputs

Use Pydantic models for validated, typed responses:

from pydantic import BaseModel
from pydantic_ai import Agent

class CityInfo(BaseModel):
    city: str
    country: str
    population: int

agent = Agent('openai:gpt-4o', output_type=CityInfo)
result = agent.run_sync('Tell me about Paris')
print(result.output.city)  # "Paris"
print(result.output.population)  # int, validated
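
The validation applied to the model's reply is ordinary Pydantic validation, so the success and failure paths can be sketched standalone, with no LLM call (the JSON strings below are stand-ins for model replies):

```python
from pydantic import BaseModel, ValidationError

class CityInfo(BaseModel):
    city: str
    country: str
    population: int

# A well-formed reply parses into a typed object
info = CityInfo.model_validate_json(
    '{"city": "Paris", "country": "France", "population": 2102650}'
)
print(info.population)  # 2102650

# A reply missing fields raises ValidationError -- the kind of error
# PydanticAI feeds back to the model when retries are configured
try:
    CityInfo.model_validate_json('{"city": "Paris"}')
except ValidationError as exc:
    errors = exc.errors()
    print(len(errors))  # 2 (country and population are missing)
```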

Agent Configuration

from pydantic_ai.settings import ModelSettings

agent = Agent(
    'openai:gpt-4o',
    output_type=MyOutput,           # Structured output type
    deps_type=MyDeps,               # Dependency injection type
    instructions='You are helpful.',  # Static instructions
    retries=2,                      # Retry attempts for validation
    name='my-agent',                # For logging/tracing
    model_settings=ModelSettings(   # Provider settings
        temperature=0.7,
        max_tokens=1000
    ),
    end_strategy='early',           # 'early' stops at the first final result; 'exhaustive' runs remaining tool calls
)

Running Agents

Three execution methods:

# Async (preferred)
result = await agent.run('prompt', deps=my_deps)

# Sync (convenience)
result = agent.run_sync('prompt', deps=my_deps)

# Streaming
async with agent.run_stream('prompt') as response:
    async for chunk in response.stream_output():
        print(chunk, end='')

Instructions vs System Prompts

# Instructions: Concatenated, for agent behavior
agent = Agent(
    'openai:gpt-4o',
    instructions='You are a helpful assistant. Be concise.'
)

# Dynamic instructions via decorator
@agent.instructions
def add_context(ctx: RunContext[MyDeps]) -> str:
    return f"User ID: {ctx.deps.user_id}"

# System prompts: Static, for model context
agent = Agent(
    'openai:gpt-4o',
    system_prompt=['You are an expert.', 'Always cite sources.']
)

Common Patterns

Parameterized Agent (Type-Safe)

from dataclasses import dataclass
from pydantic_ai import Agent, RunContext

@dataclass
class Deps:
    api_key: str
    user_id: int

agent: Agent[Deps, str] = Agent(
    'openai:gpt-4o',
    deps_type=Deps,
)

# deps is now required and type-checked
result = agent.run_sync('Hello', deps=Deps(api_key='...', user_id=123))

No Dependencies (Satisfy Type Checker)

# Option 1: Explicit type annotation
agent: Agent[None, str] = Agent('openai:gpt-4o')

# Option 2: Pass deps=None
result = agent.run_sync('Hello', deps=None)

Decision Framework

| Scenario | Configuration |
| --- | --- |
| Simple text responses | Agent(model) |
| Structured data extraction | Agent(model, output_type=MyModel) |
| Need external services | Add deps_type=MyDeps |
| Validation retries needed | Increase retries=3 |
| Debugging/monitoring | Set instrument=True |

Source

View on GitHub: https://github.com/existential-birds/beagle/blob/main/plugins/beagle-ai/skills/pydantic-ai-agent-creation/SKILL.md

Overview

Create AI agents that return validated, typed outputs using Pydantic models. This skill enables type-safe dependencies, structured responses, and configurable behavior for chat systems or LLM integrations. It covers model selection, output typing, and dependency injection to ensure robust, maintainable AI workflows.

How This Skill Works

Instantiate Agent with an output_type to enforce a Pydantic model, and provide a deps_type for typed dependencies. Configure behavior via instructions, retries, name, model_settings, and end_strategy. Agents support async, sync, and streaming execution, with optional system prompts or instruction prompts to guide behavior.

When to Use It

  • Building AI agents that return validated, typed outputs via Pydantic models
  • Creating chat systems that must extract structured data from LLM responses
  • Integrating LLMs with strict input/output validation and dependency injection
  • Defining parameterized agents with a typed deps_type for external services
  • Debugging, monitoring, or iterating agent behavior with instrument/logging

Quick Start

  1. from pydantic_ai import Agent
  2. agent = Agent('openai:gpt-4o')  # minimal agent with text output
  3. result = agent.run_sync('Hello!'); print(result.output)

Best Practices

  • Define a clear, faithful Pydantic output_type to reflect the expected response
  • Declare a matching deps_type for any external service or data dependencies
  • Use instructions and, when appropriate, system prompts to guide agent behavior
  • Tune model_settings (temperature, max_tokens) to balance creativity and reliability
  • Leverage async run, streaming, and retries to handle long tasks and validation

Example Use Cases

  • Define CityInfo(BaseModel) with city, country, population and extract these fields from a prompt like 'Tell me about Paris'
  • Create a Deps dataclass and use Agent('provider:model', deps_type=Deps) to call an external service with typed dependencies
  • Use a minimal agent for simple text responses without an output_type
  • Select provider models using strings such as 'openai:gpt-4o' or 'anthropic:claude-sonnet-4-5' for different capabilities
  • Use run_stream to process chunked outputs from a streaming LLM and display in real time
