
Modal Serverless GPU

Comprehensive guide to running ML workloads on Modal's serverless GPU cloud platform.

When to use Modal

Use Modal when:

  • Running GPU-intensive ML workloads without managing infrastructure
  • Deploying ML models as auto-scaling APIs
  • Running batch processing jobs (training, inference, data processing)
  • Paying per second for GPUs, with no idle costs
  • Prototyping ML applications quickly
  • Running scheduled jobs (cron-like workloads)

Key features:

  • Serverless GPUs: T4, L4, A10G, L40S, A100, H100, H200, B200 on-demand
  • Python-native: Define infrastructure in Python code, no YAML
  • Auto-scaling: Scale to zero, scale to 100+ GPUs instantly
  • Sub-second cold starts: Rust-based infrastructure for fast container launches
  • Container caching: Image layers cached for rapid iteration
  • Web endpoints: Deploy functions as REST APIs with zero-downtime updates

Consider alternatives instead:

  • RunPod: For longer-running pods with persistent state
  • Lambda Labs: For reserved GPU instances
  • SkyPilot: For multi-cloud orchestration and cost optimization
  • Kubernetes: For complex multi-service architectures

Quick start

Installation

pip install modal
modal setup  # Opens browser for authentication

Hello World with GPU

import modal

app = modal.App("hello-gpu")

@app.function(gpu="T4")
def gpu_info():
    import subprocess
    return subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout

@app.local_entrypoint()
def main():
    print(gpu_info.remote())

Run: modal run hello_gpu.py

Basic inference endpoint

import modal

app = modal.App("text-generation")
image = modal.Image.debian_slim().pip_install("transformers", "torch", "accelerate")

@app.cls(gpu="A10G", image=image)
class TextGenerator:
    @modal.enter()
    def load_model(self):
        from transformers import pipeline
        self.pipe = pipeline("text-generation", model="gpt2", device=0)

    @modal.method()
    def generate(self, prompt: str) -> str:
        return self.pipe(prompt, max_length=100)[0]["generated_text"]

@app.local_entrypoint()
def main():
    print(TextGenerator().generate.remote("Hello, world"))

Core concepts

Key components

Component   Purpose
App         Container for functions and resources
Function    Serverless function with compute specs
Cls         Class-based functions with lifecycle hooks
Image       Container image definition
Volume      Persistent storage for models/data
Secret      Secure credential storage

Execution modes

Command                   Description
modal run script.py       Execute and exit
modal serve script.py     Development with live reload
modal deploy script.py    Persistent cloud deployment
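
Once deployed, a function can be invoked from any Python process. A minimal sketch, assuming the "hello-gpu" app from above has been deployed (recent Modal clients expose Function.from_name; older ones use Function.lookup):

import modal

# Attach to a function on an already-deployed app
gpu_info = modal.Function.from_name("hello-gpu", "gpu_info")
print(gpu_info.remote())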

GPU configuration

Available GPUs

GPU          VRAM     Best For
T4           16GB     Budget inference, small models
L4           24GB     Inference, Ada Lovelace arch
A10G         24GB     Training/inference, 3.3x faster than T4
L40S         48GB     Recommended for inference (best cost/perf)
A100-40GB    40GB     Large model training
A100-80GB    80GB     Very large models
H100         80GB     Fastest, FP8 + Transformer Engine
H200         141GB    Auto-upgrade from H100, 4.8TB/s bandwidth
B200         —        Latest Blackwell architecture

GPU specification patterns

# Single GPU
@app.function(gpu="A100")

# Specific memory variant
@app.function(gpu="A100-80GB")

# Multiple GPUs (up to 8)
@app.function(gpu="H100:4")

# GPU with fallbacks
@app.function(gpu=["H100", "A100", "L40S"])

# Any available GPU
@app.function(gpu="any")
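
The fallback list pairs naturally with a runtime check of which GPU the container actually received. A minimal sketch (the function name is illustrative):

import modal

app = modal.App("gpu-fallback-demo")

@app.function(gpu=["H100", "A100", "L40S"])  # Tried in order of preference
def which_gpu() -> str:
    import subprocess
    # Ask the driver which GPU model this container was assigned
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    return out.stdout.strip()

@app.local_entrypoint()
def main():
    print(which_gpu.remote())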

Container images

# Basic image with pip
image = modal.Image.debian_slim(python_version="3.11").pip_install(
    "torch==2.1.0", "transformers==4.36.0", "accelerate"
)

# From CUDA base
image = modal.Image.from_registry(
    "nvidia/cuda:12.1.0-cudnn8-devel-ubuntu22.04",
    add_python="3.11"
).pip_install("torch", "transformers")

# With system packages
image = modal.Image.debian_slim().apt_install("git", "ffmpeg").pip_install("whisper")
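
Images can also run Python at build time, which is handy for baking model weights into a cached layer. A sketch assuming Image.run_function and huggingface_hub (the model choice is illustrative):

import modal

def download_weights():
    # Build-time step: files written here are frozen into the image layer
    from huggingface_hub import snapshot_download
    snapshot_download("gpt2", local_dir="/weights")

image = (
    modal.Image.debian_slim(python_version="3.11")
    .pip_install("huggingface_hub")
    .run_function(download_weights)  # Runs once at build, cached afterwards
)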

Persistent storage

volume = modal.Volume.from_name("model-cache", create_if_missing=True)

@app.function(gpu="A10G", volumes={"/models": volume})
def load_model():
    import os
    model_path = "/models/llama-7b"
    if not os.path.exists(model_path):
        model = download_model()           # placeholder: fetch weights your own way
        model.save_pretrained(model_path)
        volume.commit()                    # Persist changes so other containers see them
    return load_from_path(model_path)      # placeholder: load weights from disk
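
Containers that are already running do not automatically see commits made elsewhere; a brief sketch using Volume.reload to refresh:

@app.function(volumes={"/models": volume})
def list_models():
    volume.reload()  # Pick up commits made by other containers since startup
    import os
    return os.listdir("/models")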

Web endpoints

FastAPI endpoint decorator

@app.function()
@modal.fastapi_endpoint(method="POST")
def predict(text: str) -> dict:
    return {"result": model.predict(text)}

Full ASGI app

from fastapi import FastAPI
web_app = FastAPI()

@web_app.post("/predict")
async def predict(text: str):
    return {"result": await model.predict.remote.aio(text)}

@app.function()
@modal.asgi_app()
def fastapi_app():
    return web_app

Web endpoint types

Decorator                    Use Case
@modal.fastapi_endpoint()    Simple function → API
@modal.asgi_app()            Full FastAPI/Starlette apps
@modal.wsgi_app()            Django/Flask apps
@modal.web_server(port)      Arbitrary HTTP servers
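
@modal.web_server proxies traffic to any process listening on the declared port. A minimal sketch, assuming a uvicorn-servable app at my_module:web_app (both names are illustrative):

import modal

app = modal.App("custom-server")
image = modal.Image.debian_slim().pip_install("fastapi", "uvicorn")

@app.function(image=image)
@modal.web_server(port=8000)
def serve():
    import subprocess
    # Start the server in the background; Modal routes requests to port 8000
    subprocess.Popen(
        "uvicorn my_module:web_app --host 0.0.0.0 --port 8000", shell=True
    )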

Dynamic batching

@app.function()
@modal.batched(max_batch_size=32, wait_ms=100)
async def batch_predict(inputs: list[str]) -> list[dict]:
    # Inputs automatically batched
    return model.batch_predict(inputs)
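
Callers still submit individual inputs; Modal assembles the batches behind the scenes. A sketch of fan-in from a local entrypoint, assuming batched functions compose with .map like regular ones:

@app.local_entrypoint()
def main():
    # 100 single inputs, grouped server-side into batches of up to 32
    # (or whatever arrives within the 100 ms window)
    results = list(batch_predict.map([f"input {i}" for i in range(100)]))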

Secrets management

Create the secret from the CLI:

modal secret create huggingface HF_TOKEN=hf_xxx

Then reference it from a function:

@app.function(secrets=[modal.Secret.from_name("huggingface")])
def download_model():
    import os
    token = os.environ["HF_TOKEN"]  # Secret values are injected as env vars
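
For quick local experiments, a secret can also be built inline from your environment; a sketch assuming HF_TOKEN is set in your local shell:

import os

@app.function(secrets=[modal.Secret.from_dict({"HF_TOKEN": os.environ["HF_TOKEN"]})])
def quick_test():
    pass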

Scheduling

@app.function(schedule=modal.Cron("0 0 * * *"))  # Daily midnight
def daily_job():
    pass

@app.function(schedule=modal.Period(hours=1))
def hourly_job():
    pass

Performance optimization

Cold start mitigation

@app.function(
    container_idle_timeout=300,  # Keep warm 5 min
    allow_concurrent_inputs=10,  # Handle concurrent requests
)
def inference():
    pass

Model loading best practices

@app.cls(gpu="A100")
class Model:
    @modal.enter()  # Run once at container start
    def load(self):
        self.model = load_model()  # Load during warm-up

    @modal.method()
    def predict(self, x):
        return self.model(x)
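
Modal can additionally snapshot container memory after setup, so later cold starts restore state instead of re-running it. A minimal sketch, assuming memory-snapshot support (enable_memory_snapshot and the snap flag on @modal.enter) in your Modal version; the documented pattern loads weights to CPU before the snapshot and moves them to GPU after restore:

@app.cls(gpu="A100", enable_memory_snapshot=True)
class SnapshotModel:
    @modal.enter(snap=True)    # Runs once; end state is captured in the snapshot
    def load(self):
        self.model = load_model()  # placeholder loader, kept on CPU here

    @modal.enter(snap=False)   # Runs after each restore: move weights to GPU
    def to_gpu(self):
        self.model.to("cuda")

    @modal.method()
    def predict(self, x):
        return self.model(x)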

Parallel processing

@app.function()
def process_item(item):
    return expensive_computation(item)

@app.function()
def run_parallel():
    items = list(range(1000))
    # Fan out to parallel containers
    results = list(process_item.map(items))
    return results
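
For fire-and-forget submission, .spawn() returns a handle you can collect later (a sketch):

@app.function()
def run_async():
    # Submit without blocking; each spawn returns a FunctionCall handle
    calls = [process_item.spawn(i) for i in range(10)]
    # Block only when the results are actually needed
    return [c.get() for c in calls]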

Common configuration

@app.function(
    gpu="A100",
    memory=32768,              # 32GB RAM
    cpu=4,                     # 4 CPU cores
    timeout=3600,              # 1 hour max
    container_idle_timeout=120,# Keep warm 2 min
    retries=3,                 # Retry on failure
    concurrency_limit=10,      # Max concurrent containers
)
def my_function():
    pass

Debugging

# Test locally
if __name__ == "__main__":
    result = my_function.local()

# View logs
# modal app logs my-app
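
A few CLI commands help when inspecting a misbehaving app (app and function names are illustrative):

# modal app list                      # Show deployed and running apps
# modal app logs my-app               # Stream logs for an app
# modal shell script.py::my_function  # Interactive shell in the function's container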

Common issues

Issue                 Solution
Cold start latency    Increase container_idle_timeout, use @modal.enter()
GPU OOM               Use larger GPU (A100-80GB), enable gradient checkpointing
Image build fails     Pin dependency versions, check CUDA compatibility
Timeout errors        Increase timeout, add checkpointing

References

Source: https://github.com/Orchestra-Research/AI-Research-SKILLs/blob/main/09-infrastructure/modal/SKILL.md

Overview

Modal Serverless GPU lets you run ML workloads on GPU-backed compute without managing infrastructure. You can deploy models as auto-scaling APIs, run batch jobs, and pay per second of compute, with fast cold starts and container caching for rapid iteration.

How This Skill Works

Define infrastructure in Python using App, Function, Image, and Cls, then deploy or run with Modal. The platform provisions GPUs on demand, scales from zero to many GPUs, and delivers web endpoints with zero-downtime updates, powered by sub-second container launches and Python-native definitions (no YAML).

Best Practices

  • Explicitly specify GPU types (e.g., T4, A100) to match workload needs and control costs
  • Leverage container caching to speed up iterative development cycles
  • Expose functions as REST endpoints for zero-downtime updates and easy integration
  • Prefer Python-native definitions (App, Function, Image, Cls) over YAML-based configs
  • Start with smaller GPUs for prototyping and scale up only when needed

Example Use Cases

  • Hello World with GPU: verify GPU access by running nvidia-smi inside a GPU function
  • Basic inference endpoint: deploy a text-generation API using transformers on a GPU image
  • TextGenerator class example: load a model once in a GPU-backed class and serve generation requests
  • Batch inference job: process large datasets on-demand with automatic scaling
  • Scheduled training/inference: run cron-like jobs for nightly model updates and evaluation
