
deploy-model

npx machina-cli add skill microsoft/GitHub-Copilot-for-Azure/deploy-model --openclaw

Deploy Model

Unified entry point for all Azure OpenAI model deployment workflows. Analyzes user intent and routes to the appropriate deployment mode.

Quick Reference

| Mode | When to Use | Sub-Skill |
| --- | --- | --- |
| Preset | Quick deployment, no customization needed | preset/SKILL.md |
| Customize | Full control: version, SKU, capacity, RAI policy | customize/SKILL.md |
| Capacity Discovery | Find where you can deploy with specific capacity | capacity/SKILL.md |

Intent Detection

Analyze the user's prompt and route to the correct mode:

User Prompt
    │
    ├─ Simple deployment (no modifiers)
    │  "deploy gpt-4o", "set up a model"
    │  └─> PRESET mode
    │
    ├─ Customization keywords present
    │  "custom settings", "choose version", "select SKU",
    │  "set capacity to X", "configure content filter",
    │  "PTU deployment", "with specific quota"
    │  └─> CUSTOMIZE mode
    │
    ├─ Capacity/availability query
    │  "find where I can deploy", "check capacity",
    │  "which region has X capacity", "best region for 10K TPM",
    │  "where is this model available"
    │  └─> CAPACITY DISCOVERY mode
    │
    └─ Ambiguous (has capacity target + deploy intent)
       "deploy gpt-4o with 10K capacity to best region"
       └─> CAPACITY DISCOVERY first → then PRESET or CUSTOMIZE

Routing Rules

| Signal in Prompt | Route To | Reason |
| --- | --- | --- |
| Just model name, no options | Preset | User wants quick deployment |
| "custom", "configure", "choose", "select" | Customize | User wants control |
| "find", "check", "where", "which region", "available" | Capacity | User wants discovery |
| Specific capacity number + "best region" | Capacity → Preset | Discover, then deploy quickly |
| Specific capacity number + "custom" keywords | Capacity → Customize | Discover, then deploy with options |
| "PTU", "provisioned throughput" | Customize | PTU requires SKU selection |
| "optimal region", "best region" (no capacity target) | Preset | Region optimization is Preset's specialty |

Multi-Mode Chaining

Some prompts require two modes in sequence:

Pattern: Capacity → Deploy

When a user specifies a capacity requirement AND wants deployment:

  1. Run Capacity Discovery to find regions/projects with sufficient quota
  2. Present findings to user
  3. Ask: "Would you like to deploy with quick defaults or customize settings?"
  4. Route to Preset or Customize based on answer
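The four steps above can be sketched as a small orchestration function. This is a hedged sketch: in the real skill the findings come from Capacity Discovery and the question is posed conversationally, so the `findings` list and `ask` callback here stand in for those pieces:

```python
from typing import Callable

def capacity_then_deploy(
    findings: list[dict],
    ask: Callable[[str], str],
) -> str:
    """Present capacity-discovery findings, then route the follow-up
    deployment to 'preset' or 'customize' based on the user's answer."""
    for f in findings:                          # step 2: present findings
        print(f"{f['region']}: {f['available']} TPM available")
    answer = ask("Would you like to deploy with quick defaults "
                 "or customize settings?")      # step 3: ask the user
    # step 4: route based on the answer
    return "customize" if "custom" in answer.lower() else "preset"
```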

💡 Tip: If unsure which mode the user wants, default to Preset (quick deployment). Users who want customization will typically use explicit keywords like "custom", "configure", or "with specific settings".

Project Selection (All Modes)

Before any deployment, resolve which project to deploy to. This applies to all modes (preset, customize, and after capacity discovery).

Resolution Order

  1. Check PROJECT_RESOURCE_ID env var — if set, use it as the default
  2. Check user prompt — if user named a specific project or region, use that
  3. If neither — query the user's projects and suggest the current one
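The resolution order can be sketched as follows. The substring match against the prompt and the "first project is the current one" convention are simplifying assumptions; in the real flow the project list comes from querying Azure:

```python
import os

def resolve_project(prompt: str, projects: list[str]) -> str:
    """Resolve the target project per the resolution order above."""
    # 1. PROJECT_RESOURCE_ID env var wins as the default
    env = os.environ.get("PROJECT_RESOURCE_ID")
    if env:
        return env
    # 2. A project named in the user prompt overrides the suggestion
    for name in projects:
        if name.lower() in prompt.lower():
            return name
    # 3. Otherwise suggest the current (here: first) project
    return projects[0]
```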

Confirmation Step (Required)

Always confirm the target before deploying. Show the user what will be used and give them a chance to change it:

Deploying to:
  Project:  <project-name>
  Region:   <region>
  Resource: <resource-group>

Is this correct? Or choose a different project:
  1. ✅ Yes, deploy here (default)
  2. 📋 Show me other projects in this region
  3. 🌍 Choose a different region

If the user picks option 2, show the top 5 projects in that region:

Projects in <region>:
  1. project-alpha (rg-alpha)
  2. project-beta (rg-beta)
  3. project-gamma (rg-gamma)
  ...

⚠️ Never deploy without showing the user which project will be used. This prevents accidental deployments to the wrong resource.

Pre-Deployment Validation (All Modes)

Before presenting any deployment options (SKU, capacity), always validate both of these:

  1. Model supports the SKU — query the model catalog to confirm the selected model+version supports the target SKU:

    az cognitiveservices model list --location <region> --subscription <sub-id> -o json
    

    Filter for the model, extract .model.skus[].name to get supported SKUs.

  2. Subscription has available quota — check that the user's subscription has unallocated quota for the SKU+model combination:

    az cognitiveservices usage list --location <region> --subscription <sub-id> -o json
    

    Match by usage name pattern OpenAI.<SKU>.<model-name> (e.g., OpenAI.GlobalStandard.gpt-4o). Compute available = limit - currentValue.

⚠️ Warning: Only present options that pass both checks. Do NOT show hardcoded SKU lists — always query dynamically. SKUs with 0 available quota should be shown as ❌ informational items, not selectable options.
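Both checks reduce to filtering the JSON those two az commands emit. A sketch, assuming the output shapes of `az cognitiveservices model list` (items with `.model.skus[].name`) and `az cognitiveservices usage list` (items with `.name.value`, `.limit`, `.currentValue`); the sample data in the test is made up:

```python
def supported_skus(models: list[dict], model_name: str) -> set[str]:
    """Check 1: extract .model.skus[].name for the given model from
    `az cognitiveservices model list` output."""
    return {
        sku["name"]
        for m in models
        if m["model"]["name"] == model_name
        for sku in m["model"].get("skus", [])
    }

def available_quota(usages: list[dict], sku: str, model_name: str) -> int:
    """Check 2: compute available = limit - currentValue for the usage
    named OpenAI.<SKU>.<model-name> in `az cognitiveservices usage list`
    output. Returns 0 if no matching usage entry is found."""
    target = f"OpenAI.{sku}.{model_name}"
    for u in usages:
        if u["name"]["value"] == target:
            return int(u["limit"]) - int(u["currentValue"])
    return 0
```

An option is selectable only when the SKU appears in `supported_skus(...)` AND `available_quota(...)` is greater than zero; otherwise it is shown as a ❌ informational item.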

💡 Quota management: For quota increase requests, usage monitoring, and troubleshooting quota errors, defer to the quota skill instead of duplicating that guidance inline.

Prerequisites

All deployment modes require:

  • Azure CLI installed and authenticated (az login)
  • Active Azure subscription with deployment permissions
  • Azure AI Foundry project resource ID (supplied via the PROJECT_RESOURCE_ID env var, or the agent will help discover it)

Source

git clone https://github.com/microsoft/GitHub-Copilot-for-Azure.git

The skill definition is at plugin/skills/microsoft-foundry/models/deploy-model/SKILL.md in that repository.

Overview

Deploy Azure OpenAI models through a single entry point that detects user intent and routes to Preset, Customize, or Capacity Discovery. It streamlines deployment while handling capacity checks across regions and projects.

How This Skill Works

The skill analyzes the user prompt to detect intent, then routes to the appropriate deployment mode (Preset, Customize, or Capacity Discovery). It supports multi-mode chaining and resolves the target project before deployment so that deployments land in the correct context.

When to Use It

  • A bare model name with no options, to deploy quickly
  • Prompts containing customize or configure keywords, to use Customize mode
  • Capacity or availability queries, to find where a model can be deployed
  • Requests naming a specific capacity and the best region, to deploy with capacity awareness
  • Ambiguous prompts combining a capacity target with deploy intent, which trigger Capacity Discovery first

Quick Start

  1. Provide a deployment prompt that states whether you want a quick deployment or custom options
  2. Let deploy-model analyze intent and route to Preset, Customize, or Capacity Discovery
  3. Confirm the target project and proceed with deployment

Best Practices

  • Use clear intent signals in user prompts to improve routing accuracy
  • Prefer Capacity Discovery when capacity is a concern before deploying
  • Always resolve the target project before starting deployment
  • Test both Preset and Customize paths to verify routing decisions
  • Do not use this skill to list or delete deployments; use MCP tools for those actions

Example Use Cases

  • deploy gpt-4o quickly using Preset mode
  • deploy a customized model with a specific version, SKU, capacity, or PTU configuration
  • find available regions and capacity for a given model
  • determine the best region for a 10K TPM deployment
  • deploy to a specified project after completing capacity discovery
