
Modal Images Reference

This document is a detailed reference for building container images in Modal.

Base Images

debian_slim

Minimal Debian-based image, recommended for most use cases.

image = modal.Image.debian_slim(python_version="3.12")

micromamba

For conda/mamba package management.

image = modal.Image.micromamba().micromamba_install(
    "pytorch", "cudatoolkit=11.8",
    channels=["pytorch", "conda-forge"]
)

from_registry

Use any public Docker image.

image = modal.Image.from_registry("pytorch/pytorch:2.0.0-cuda11.7-cudnn8-runtime")

# Add Python to images without it
image = modal.Image.from_registry("ubuntu:22.04", add_python="3.12")

from_dockerfile

Build from an existing Dockerfile.

image = modal.Image.from_dockerfile("./Dockerfile")

from_aws_ecr

Pull from private AWS ECR.

aws_secret = modal.Secret.from_name("my-aws-secret")
image = modal.Image.from_aws_ecr(
    "123456789.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest",
    secret=aws_secret,
)

Image Methods

Package Installation

image = (
    modal.Image.debian_slim()
    # Python packages
    .pip_install("torch", "transformers")           # Standard pip
    .uv_pip_install("numpy", "pandas")              # Fast uv installer
    .pip_install("flash-attn", gpu="H100")          # With GPU access
    
    # System packages
    .apt_install("git", "ffmpeg", "curl")
    
    # Conda packages
    # (only with micromamba base)
    .micromamba_install("pytorch", channels=["pytorch"])
)

Adding Local Files

image = (
    modal.Image.debian_slim()
    # Add local directory
    .add_local_dir("./config", remote_path="/app/config")
    
    # Add single file
    .add_local_file("./model.py", remote_path="/app/model.py")
    
    # Add Python module (for imports)
    .add_local_python_source("my_module")
)

Environment Configuration

image = (
    modal.Image.debian_slim()
    # Environment variables
    .env({"MY_VAR": "value", "DEBUG": "true"})
    
    # Working directory
    .workdir("/app")
    
    # Custom entrypoint
    .entrypoint(["/usr/bin/my_entrypoint.sh"])
)

Running Commands

image = (
    modal.Image.debian_slim()
    # Shell commands
    .run_commands(
        "git clone https://github.com/user/repo",
        "cd repo && pip install -e ."
    )
    
    # Python function during build
    .run_function(download_models, secrets=[hf_secret])
)
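run_function executes a regular Python function at build time and snapshots its side effects (e.g. downloaded weights) into the image. The download_models and hf_secret used above must be defined at module scope; a minimal sketch, with the secret name and model ID as illustrative placeholders:

```python
import modal

# Hypothetical: a Hugging Face token stored as a Modal secret
hf_secret = modal.Secret.from_name("huggingface-secret")

def download_models():
    # Runs once during the image build; files written here
    # become part of the image layer.
    from huggingface_hub import snapshot_download
    snapshot_download("gpt2", local_dir="/models/gpt2")

image = (
    modal.Image.debian_slim()
    .pip_install("huggingface_hub")
    .run_function(download_models, secrets=[hf_secret])
)
```

If download_models changes, only that layer and the ones after it are rebuilt.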

Build Optimization

Layer Ordering

Order layers from least to most frequently changed:

image = (
    modal.Image.debian_slim()
    .apt_install("ffmpeg")              # 1. System packages (rarely change)
    .pip_install("torch==2.1.0")        # 2. Large dependencies (pinned)
    .pip_install("transformers")        # 3. Application dependencies
    .add_local_python_source("app")     # 4. Application code (changes often)
)

Force Rebuild

# Force specific layer to rebuild
image = (
    modal.Image.debian_slim()
    .pip_install("my-package", force_build=True)
)
# Force all images to rebuild
MODAL_FORCE_BUILD=1 modal run app.py

# Ignore cache without breaking it
MODAL_IGNORE_CACHE=1 modal run app.py

GPU During Build

# Some packages need GPU access during installation
image = (
    modal.Image.debian_slim()
    .pip_install("bitsandbytes", gpu="H100")
)

CUDA Images

For libraries requiring full CUDA toolkit:

cuda_version = "12.8.1"
os_version = "ubuntu24.04"

image = (
    modal.Image.from_registry(
        f"nvidia/cuda:{cuda_version}-devel-{os_version}",
        add_python="3.12"
    )
    .entrypoint([])  # Remove base image entrypoint
    .pip_install("torch", "flash-attn")
)

Image Imports

Handle packages that only exist in the container:

image = modal.Image.debian_slim().pip_install("torch", "transformers")

# Imports only happen when container runs
with image.imports():
    import torch
    from transformers import pipeline

@app.function(image=image)
def my_function():
    # torch and pipeline are available here
    model = pipeline("text-generation")
    ...

Source

https://github.com/samarth777/modal-skills/blob/main/skills/images/SKILL.md

Overview

This skill is a reference for building container images in Modal. It covers base images (debian_slim, micromamba, from_registry, from_dockerfile, from_aws_ecr), common image methods (installing packages, adding local files, configuring the environment, and running commands), and build-optimization patterns that improve layer caching and support GPU-enabled builds.

How This Skill Works

Start from a base image and chain image methods (pip_install, uv_pip_install, apt_install, micromamba_install, add_local_dir, add_local_file, add_local_python_source, env, workdir, entrypoint, run_commands, run_function) to assemble the final image. Images can also be built from existing sources via from_registry or from_dockerfile, with guidance on layer ordering and build-time GPU access to keep builds fast.

When to Use It

  • You need a minimal Debian-based image for Python apps (debian_slim).
  • You want a conda/micromamba environment to manage packages.
  • You prefer reusing an existing public Docker image via from_registry and optionally adding Python.
  • You need to add local config, model files, or Python modules to the image using add_local_dir/add_local_file/add_local_python_source.
  • You require CUDA-enabled images or private AWS ECR access (from_aws_ecr, CUDA images).

Quick Start

  1. Pick a base image, e.g., image = modal.Image.debian_slim().
  2. Install packages and add assets using image.pip_install / image.apt_install / image.add_local_dir / image.add_local_file / image.add_local_python_source as needed.
  3. Configure the environment, entrypoint, and any build commands, then run your app with the configured image.
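The steps above combine into a small end-to-end app; package names, paths, and the app name here are placeholders:

```python
import modal

app = modal.App("image-quickstart")

image = (
    # 1. Pick a base image
    modal.Image.debian_slim(python_version="3.12")
    # 2. Install system and Python packages
    .apt_install("git")
    .pip_install("requests")
    # 3. Configure the environment and working directory
    .env({"DEBUG": "true"})
    .workdir("/app")
)

@app.function(image=image)
def check():
    # Packages installed into the image are importable here
    import requests
    print(requests.__version__)
```

Running modal run app.py::check builds the image on first use and reuses cached layers afterward.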

Best Practices

  • Choose the right base image first: debian_slim for most tasks; switch to micromamba when you need conda environments.
  • Group installations by type (system packages, Python packages, application code) to improve cache hits.
  • Order layers from least to most frequently changed to maximize Docker layer caching.
  • Use add_local_dir, add_local_file, and add_local_python_source to bring in configs, models, and modules cleanly.
  • Pass gpu=... to installs that need GPU access at build time, and use force_build or the MODAL_FORCE_BUILD / MODAL_IGNORE_CACHE environment variables to refresh specific layers.

Example Use Cases

  • Debian slim image with Python 3.12 via image = modal.Image.debian_slim(python_version='3.12').
  • Micromamba-based PyTorch setup: image = modal.Image.micromamba().micromamba_install('pytorch', 'cudatoolkit=11.8', channels=['pytorch','conda-forge']).
  • From a public registry: image = modal.Image.from_registry('pytorch/pytorch:2.0.0-cuda11.7-cudnn8-runtime').
  • Add local assets: image = modal.Image.debian_slim().add_local_dir('./config', remote_path='/app/config').add_local_file('./model.py', remote_path='/app/model.py').add_local_python_source('my_module').
  • CUDA-enabled image: image = modal.Image.from_registry('nvidia/cuda:12.8.1-devel-ubuntu24.04', add_python='3.12').entrypoint([]).pip_install('torch','flash-attn').
