# azure-ai-projects-py

```bash
npx machina-cli add skill microsoft/skills/azure-ai-projects-py --openclaw
```

# Azure AI Projects Python SDK (Foundry SDK)

Build AI applications on Microsoft Foundry using the azure-ai-projects SDK.
## Installation

```bash
pip install azure-ai-projects azure-identity
```
## Environment Variables

```bash
AZURE_AI_PROJECT_ENDPOINT="https://<resource>.services.ai.azure.com/api/projects/<project>"
AZURE_AI_MODEL_DEPLOYMENT_NAME="gpt-4o-mini"
```
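A missing variable otherwise surfaces later as a bare `KeyError` inside client construction. A small illustrative helper (not part of the SDK) fails fast with a clearer message:

```python
import os

REQUIRED_VARS = ("AZURE_AI_PROJECT_ENDPOINT", "AZURE_AI_MODEL_DEPLOYMENT_NAME")

def read_config() -> dict:
    """Read the required settings from the environment, failing fast
    with a single clear message listing everything that is missing."""
    missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {name: os.environ[name] for name in REQUIRED_VARS}
```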
## Authentication

```python
import os

from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient

credential = DefaultAzureCredential()
client = AIProjectClient(
    endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
    credential=credential,
)
```
## Client Operations Overview

| Operation | Access | Purpose |
|---|---|---|
| `client.agents` | `.agents.*` | Agent CRUD, versions, threads, runs |
| `client.connections` | `.connections.*` | List/get project connections |
| `client.deployments` | `.deployments.*` | List model deployments |
| `client.datasets` | `.datasets.*` | Dataset management |
| `client.indexes` | `.indexes.*` | Index management |
| `client.evaluations` | `.evaluations.*` | Run evaluations |
| `client.red_teams` | `.red_teams.*` | Red team operations |
## Two Client Approaches

### 1. AIProjectClient (Native Foundry)

```python
from azure.ai.projects import AIProjectClient

client = AIProjectClient(
    endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
    credential=DefaultAzureCredential(),
)

# Use Foundry-native operations
agent = client.agents.create_agent(
    model=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
    name="my-agent",
    instructions="You are helpful.",
)
```
### 2. OpenAI-Compatible Client

```python
# Get an OpenAI-compatible client from the project
openai_client = client.get_openai_client()

# Use the standard OpenAI API
response = openai_client.chat.completions.create(
    model=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
    messages=[{"role": "user", "content": "Hello!"}],
)
```
## Agent Operations

### Create Agent (Basic)

```python
agent = client.agents.create_agent(
    model=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
    name="my-agent",
    instructions="You are a helpful assistant.",
)
```
### Create Agent with Tools

```python
from azure.ai.agents.models import CodeInterpreterTool, FileSearchTool

agent = client.agents.create_agent(
    model=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
    name="tool-agent",
    instructions="You can execute code and search files.",
    tools=[CodeInterpreterTool(), FileSearchTool()],
)
```
### Versioned Agents with PromptAgentDefinition

```python
from azure.ai.projects.models import PromptAgentDefinition

# Create a versioned agent
agent_version = client.agents.create_version(
    agent_name="customer-support-agent",
    definition=PromptAgentDefinition(
        model=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
        instructions="You are a customer support specialist.",
        tools=[],  # Add tools as needed
    ),
    version_label="v1.0",
)
```
See references/agents.md for detailed agent patterns.
## Tools Overview
| Tool | Class | Use Case |
|---|---|---|
| Code Interpreter | CodeInterpreterTool | Execute Python, generate files |
| File Search | FileSearchTool | RAG over uploaded documents |
| Bing Grounding | BingGroundingTool | Web search (requires connection) |
| Azure AI Search | AzureAISearchTool | Search your indexes |
| Function Calling | FunctionTool | Call your Python functions |
| OpenAPI | OpenApiTool | Call REST APIs |
| MCP | McpTool | Model Context Protocol servers |
| Memory Search | MemorySearchTool | Search agent memory stores |
| SharePoint | SharepointGroundingTool | Search SharePoint content |
See references/tools.md for all tool patterns.
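Under the hood, function calling boils down to mapping a model-emitted tool call (a function name plus JSON-encoded arguments) onto one of your Python callables. A minimal, SDK-free sketch of that dispatch step (the `get_weather` function and registry are illustrative, not SDK APIs):

```python
import json

def get_weather(city: str) -> str:
    # Stand-in for a real implementation.
    return f"Sunny in {city}"

# Registry of callables the agent is allowed to invoke.
TOOLS = {"get_weather": get_weather}

def dispatch(tool_call: dict) -> str:
    """Resolve a model-emitted tool call: look up the registered function,
    parse its JSON arguments, and invoke it."""
    func = TOOLS[tool_call["name"]]
    kwargs = json.loads(tool_call["arguments"])
    return func(**kwargs)
```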
## Thread and Message Flow

```python
# 1. Create a thread
thread = client.agents.threads.create()

# 2. Add a message
client.agents.messages.create(
    thread_id=thread.id,
    role="user",
    content="What's the weather like?",
)

# 3. Create and process a run
run = client.agents.runs.create_and_process(
    thread_id=thread.id,
    agent_id=agent.id,
)

# 4. Get the response
if run.status == "completed":
    messages = client.agents.messages.list(thread_id=thread.id)
    for msg in messages:
        if msg.role == "assistant":
            print(msg.content[0].text.value)
```
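`create_and_process` submits the run and polls until it reaches a terminal status. A rough sketch of that polling loop, written against any object exposing a `get(thread_id=..., run_id=...)` method so it is runnable without the SDK (the helper name and timeout defaults are assumptions, not SDK API):

```python
import time

TERMINAL_STATUSES = {"completed", "failed", "cancelled", "expired"}

def wait_for_run(runs, thread_id, run_id, interval=1.0, timeout=300.0):
    """Poll runs.get(...) until the run reaches a terminal status.

    `runs` is any object exposing get(thread_id=..., run_id=...), such as
    client.agents.runs in the SDK or a test stub."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        run = runs.get(thread_id=thread_id, run_id=run_id)
        if run.status in TERMINAL_STATUSES:
            return run
        time.sleep(interval)
    raise TimeoutError(f"Run {run_id} did not finish within {timeout}s")
```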
## Connections

```python
# List all connections
connections = client.connections.list()
for conn in connections:
    print(f"{conn.name}: {conn.connection_type}")

# Get a specific connection
connection = client.connections.get(connection_name="my-search-connection")
```
See references/connections.md for connection patterns.
## Deployments

```python
# List available model deployments
deployments = client.deployments.list()
for deployment in deployments:
    print(f"{deployment.name}: {deployment.model}")
```
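Enumeration often pairs with a check that the model you plan to target is actually deployed. A selection step sketched over plain records with the same fields printed above (field names mirror the listing; the helper itself is illustrative, not an SDK API):

```python
def find_deployment(deployments, model_name):
    """Return the first deployment record serving `model_name`, or None.
    Records are plain dicts here; SDK objects expose the same fields as
    attributes."""
    for d in deployments:
        if d.get("model") == model_name:
            return d
    return None
```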
See references/deployments.md for deployment patterns.
## Datasets and Indexes

```python
# List datasets
datasets = client.datasets.list()

# List indexes
indexes = client.indexes.list()
```
See references/datasets-indexes.md for data operations.
## Evaluation

```python
# Use the OpenAI client for evals
openai_client = client.get_openai_client()

# Create an evaluation run with built-in evaluators
eval_run = openai_client.evals.runs.create(
    eval_id="my-eval",
    name="quality-check",
    data_source={
        "type": "custom",
        "item_references": [{"item_id": "test-1"}],
    },
    testing_criteria=[
        {"type": "fluency"},
        {"type": "task_adherence"},
    ],
)
```
See references/evaluation.md for evaluation patterns.
## Async Client

```python
from azure.identity.aio import DefaultAzureCredential  # async credential for async clients
from azure.ai.projects.aio import AIProjectClient

async with AIProjectClient(
    endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
    credential=DefaultAzureCredential(),
) as client:
    agent = await client.agents.create_agent(...)
    # ... async operations
```
See references/async-patterns.md for async patterns.
## Memory Stores

```python
# Create a memory store for the agent
memory_store = client.agents.create_memory_store(
    name="conversation-memory",
)

# Attach it to an agent for persistent memory
agent = client.agents.create_agent(
    model=os.environ["AZURE_AI_MODEL_DEPLOYMENT_NAME"],
    name="memory-agent",
    tools=[MemorySearchTool()],
    tool_resources={"memory": {"store_ids": [memory_store.id]}},
)
```
## Best Practices

- Use context managers for the async client: `async with AIProjectClient(...) as client:`
- Clean up agents when done: `client.agents.delete_agent(agent.id)`
- Use `create_and_process` for simple runs, streaming for real-time UX
- Use versioned agents for production deployments
- Prefer connections for external service integration (AI Search, Bing, etc.)
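The clean-up advice can be wrapped in a context manager so an agent is deleted even when the enclosing code raises. A sketch against any client exposing `agents.delete_agent` (the `ephemeral_agent` helper is an assumption for illustration, not an SDK API):

```python
from contextlib import contextmanager

@contextmanager
def ephemeral_agent(client, agent):
    """Yield `agent`, deleting it via client.agents.delete_agent(agent.id)
    on exit, even if the body raises."""
    try:
        yield agent
    finally:
        client.agents.delete_agent(agent.id)
```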
## SDK Comparison
| Feature | azure-ai-projects | azure-ai-agents |
|---|---|---|
| Level | High-level (Foundry) | Low-level (Agents) |
| Client | AIProjectClient | AgentsClient |
| Versioning | create_version() | Not available |
| Connections | Yes | No |
| Deployments | Yes | No |
| Datasets/Indexes | Yes | No |
| Evaluation | Via OpenAI client | No |
| When to use | Full Foundry integration | Standalone agent apps |
## Reference Files
- references/agents.md: Agent operations with PromptAgentDefinition
- references/tools.md: All agent tools with examples
- references/evaluation.md: Evaluation operations overview
- references/built-in-evaluators.md: Complete built-in evaluator reference
- references/custom-evaluators.md: Code and prompt-based evaluator patterns
- references/connections.md: Connection operations
- references/deployments.md: Deployment enumeration
- references/datasets-indexes.md: Dataset and index operations
- references/async-patterns.md: Async client usage
- references/api-reference.md: Complete API reference for all 373 SDK exports (v2.0.0b4)
- scripts/run_batch_evaluation.py: CLI tool for batch evaluations
## Source

[View on GitHub](https://github.com/microsoft/skills/blob/main/.github/plugins/azure-sdk-python/skills/azure-ai-projects-py/SKILL.md)

## Overview
Build AI applications using the Azure AI Projects Python SDK (azure-ai-projects). This high-level Foundry SDK lets you work with project clients, create versioned agents with PromptAgentDefinition, run evaluations, and manage connections, deployments, datasets, and indexes; it also exposes an OpenAI-compatible client for standard OpenAI-style workflows. For low-level agent operations, use azure-ai-agents-python.
## How This Skill Works
You interact with the Foundry project via AIProjectClient and its surfaces (agents, connections, deployments, datasets, indexes, evaluations). You can create versioned agents using PromptAgentDefinition, run evaluations to validate behavior, and switch to an OpenAI-compatible client for standard OpenAI-style workflows. The SDK provides a high-level abstraction over common Foundry tasks, while still allowing OpenAI-compatible access when needed.
## When to Use It
- When building AI applications that interact with Foundry project clients
- When creating versioned agents with PromptAgentDefinition
- When running evaluations to validate agent performance
- When managing connections, deployments, datasets, or indexes within a project
- When using OpenAI-compatible clients for Azure-hosted workflows
## Quick Start

- Step 1: Install packages: `pip install azure-ai-projects azure-identity`
- Step 2: Set environment variables `AZURE_AI_PROJECT_ENDPOINT` and `AZURE_AI_MODEL_DEPLOYMENT_NAME` (e.g., `gpt-4o-mini`)
- Step 3: Authenticate and instantiate the client, then create a basic agent with `client.agents.create_agent`
## Best Practices
- Use the native AIProjectClient for full Foundry-native operations
- Prefer PromptAgentDefinition when creating versioned agents
- Store endpoint and model deployment details in environment variables (e.g., AZURE_AI_PROJECT_ENDPOINT, AZURE_AI_MODEL_DEPLOYMENT_NAME) and rotate credentials regularly
- Leverage get_openai_client() for OpenAI-compatible workflows to simplify integration
- Apply governance: organize deployments, datasets, and indexes with clear naming and access controls
## Example Use Cases
- Create a versioned agent with PromptAgentDefinition and a version_label to track changes
- List deployments for a model and verify availability across environments
- Run an evaluation via client.evaluations to benchmark agent behavior
- Use the OpenAI-compatible client to send a chat completion request against a managed deployment
- Manage project connections and datasets to prepare data for RAG and retrieval tasks