
samarth777/modal-skills Skills (9)

Browse AI agent skills from samarth777/modal-skills for Claude Code, OpenClaw, Cursor, Windsurf, and more. Install them with a single command to extend what your agents can do.

Code Exec


# Code Execution Sandbox Example

A complete example of a secure code execution service for LLM-generated code.

```python
import modal
from typing import Optional

app = modal.App("code-executor")

# --- Sandboxed Execution Image ---
sandbox_image = (
    modal.Image.debian_slim(python_version="3.1…
```
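The preview above is cut off, but the core pattern it describes — run untrusted, LLM-generated code in an isolated process with a hard timeout and captured output — can be sketched locally with only the standard library. This is a stand-in for the isolation a Modal Sandbox provides, not the skill's own code; `run_untrusted` is an illustrative helper:

```python
import subprocess
import sys

def run_untrusted(code: str, timeout: float = 5.0) -> dict:
    """Execute a snippet in a fresh Python subprocess and capture the result.

    A local stand-in for a Modal Sandbox: the snippet cannot touch this
    interpreter's state, and a hung snippet is killed after `timeout` seconds.
    """
    try:
        proc = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        return {"stdout": proc.stdout, "stderr": proc.stderr, "returncode": proc.returncode}
    except subprocess.TimeoutExpired:
        # Treat a timeout as a failed run with a sentinel return code.
        return {"stdout": "", "stderr": "timed out", "returncode": -1}

result = run_untrusted("print(2 + 2)")
print(result["stdout"].strip())  # → 4
```

A real Modal Sandbox adds container-level isolation and resource limits on top of this process boundary; the request/response shape is the same.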

Data Pipeline


# Data Processing Pipeline Example

A complete example of a scalable data processing pipeline on Modal.

```python
import modal
from datetime import datetime

# --- Image Definition ---
image = (
    modal.Image.debian_slim(python_version="3.12")
    .pip_install(
        "pandas==2.2.0",
        "p…
```
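The pipeline preview is truncated, but the shape such pipelines take on Modal — fan a list of inputs out to parallel workers via `.map()`, then merge the partial results — can be sketched serially in plain Python. `process_chunk`, `run_pipeline`, and the sample records are illustrative, not from the skill:

```python
from datetime import datetime

def process_chunk(records: list[dict]) -> dict:
    """Stand-in for one parallel worker: aggregate a chunk of records."""
    total = sum(r["value"] for r in records)
    return {"count": len(records), "total": total}

def run_pipeline(records: list[dict], chunk_size: int = 2) -> dict:
    # Fan out: split records into chunks (on Modal, each chunk would be
    # one invocation dispatched with app.function().map(...)).
    chunks = [records[i:i + chunk_size] for i in range(0, len(records), chunk_size)]
    partials = [process_chunk(c) for c in chunks]
    # Merge: combine the per-chunk partial results into one summary.
    return {
        "count": sum(p["count"] for p in partials),
        "total": sum(p["total"] for p in partials),
        "finished_at": datetime.now().isoformat(),
    }

data = [{"value": v} for v in (1, 2, 3, 4, 5)]
summary = run_pipeline(data)
print(summary["count"], summary["total"])  # → 5 15
```

The serial loop over `chunks` is the only part Modal replaces; the chunking and merge logic stay the same when the work runs remotely.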

GPUs


# Modal GPU Reference

Detailed reference for GPU acceleration on Modal.

## Available GPUs

| GPU | VRAM | Max Count | Best For |
|-----|------|-----------|----------|
| `T4` | 16 GB | 8 | Budget inference, light training |
| `L4` | 24 GB | 8 | Inference |
| `A10` | 24 GB | 4 | Inference, light tra… |

Image Gen


# Image Generation Service Example

A complete example of deploying a Stable Diffusion image generation service.

```python
import modal
import io
import base64

# --- Configuration ---
MODEL_ID = "stabilityai/stable-diffusion-xl-base-1.0"
GPU_TYPE = "A10"

# --- Image Definition ---
image = (
    m…
```
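The preview imports `io` and `base64` before it is cut off, which suggests the service returns generated images as base64-encoded bytes — a common pattern for JSON APIs. A minimal sketch of that encode/decode round trip (the sample bytes are placeholder data, not a real image):

```python
import base64

def encode_image(image_bytes: bytes) -> str:
    """Encode raw image bytes as a base64 string for a JSON response."""
    return base64.b64encode(image_bytes).decode("utf-8")

def decode_image(payload: str) -> bytes:
    """Recover the raw image bytes on the client side."""
    return base64.b64decode(payload)

fake_png = b"\x89PNG\r\n\x1a\n" + b"\x00" * 16  # sample bytes, not a real image
payload = encode_image(fake_png)
assert decode_image(payload) == fake_png
```

Base64 inflates payloads by roughly a third, but it lets image bytes travel safely inside a JSON body without a separate file-serving path.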

Images


# Modal Images Reference

This document provides detailed reference for building container images in Modal.

## Base Images

### `debian_slim`

Minimal Debian-based image, recommended for most use cases.

```python
image = modal.Image.debian_slim(python_version="3.12")
```

### `micromamba`

For conda…

LLM Inference


# LLM Inference Service Example

A complete example of deploying an LLM inference service on Modal using vLLM.

```python
import modal

# --- Configuration ---
MODEL_NAME = "meta-llama/Llama-3.1-8B-Instruct"
GPU_TYPE = "A100"

# --- Image Definition ---
image = (
    modal.Image.debian_slim(python_v…
```
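The preview names vLLM, which exposes an OpenAI-compatible chat-completions API once deployed. A request body for such a service can be built in plain Python; the parameter defaults here are assumptions for illustration, not values taken from the skill:

```python
import json

def build_chat_request(model: str, user_message: str,
                       max_tokens: int = 256, temperature: float = 0.7) -> str:
    """Build an OpenAI-style chat-completions request body as JSON."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return json.dumps(body)

payload = build_chat_request("meta-llama/Llama-3.1-8B-Instruct", "Hello!")
parsed = json.loads(payload)
print(parsed["messages"][0]["role"])  # → user
```

The same body would be POSTed to the deployed service's `/v1/chat/completions` route, which is the path vLLM's OpenAI-compatible server exposes.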

modal


Guide for building serverless Python applications on Modal - a cloud platform for running AI/ML workloads, GPU-accelerated code, web endpoints, scheduled jobs, and batch processing with minimal configuration. Use when deploying Python code to Modal's infrastructure, running GPU inference, creating web APIs, processing data at scale, or building AI applications.

Sandboxes


# Modal Sandboxes Reference

Detailed reference for running arbitrary code in isolated containers.

## Overview

Sandboxes are containers you can create, interact with, and terminate at runtime. They're ideal for:

- Executing LLM-generated code
- Running untrusted user code
- Interactive developmen…

Web Endpoints


# Modal Web Endpoints Reference

Detailed reference for creating web APIs and endpoints on Modal.

## Endpoint Types

### FastAPI Endpoint (Recommended)

Simple function-based endpoints:

```python
import modal

app = modal.App("web-api")
image = modal.Image.debian_slim().pip_install("fastapi[standa…
```
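The FastAPI preview is cut off, but the underlying idea — an endpoint is just a function that turns an HTTP request into a JSON response — can be sketched with only the standard library. This is a local stand-in for illustration, not Modal's or FastAPI's API:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The "endpoint": map the request path to a JSON body.
        body = json.dumps({"path": self.path, "message": "hello"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), HelloHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/ping") as resp:
    data = json.loads(resp.read())
print(data["message"])  # → hello
server.shutdown()
```

On Modal, the server, TLS, and public URL are all provisioned for you; only the handler-style function remains to write.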
