ray-data
Ray Data - Scalable ML Data Processing
Distributed data processing library for ML and AI workloads.
When to use Ray Data
Use Ray Data when:
- Processing large datasets (>100GB) for ML training
- Need distributed data preprocessing across cluster
- Building batch inference pipelines
- Loading multi-modal data (images, audio, video)
- Scaling data processing from laptop to cluster
Key features:
- Streaming execution: Process data larger than memory
- GPU support: Accelerate transforms with GPUs
- Framework integration: PyTorch, TensorFlow, HuggingFace
- Multi-modal: Images, Parquet, CSV, JSON, audio, video
Use alternatives instead:
- Pandas: Small data (<1GB) on single machine
- Dask: Tabular data, SQL-like operations
- Spark: Enterprise ETL, SQL queries
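If you start small and outgrow a single machine, data can move between pandas and Ray Data in both directions; a minimal sketch (the column name is illustrative):

```python
import pandas as pd
import ray

df = pd.DataFrame({"value": range(100)})

ds = ray.data.from_pandas(df)  # Promote to a distributed dataset when data grows
small_df = ds.to_pandas()      # Pull results back for single-machine analysis
```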
Quick start
Installation
```bash
pip install -U 'ray[data]'
```
Load and transform data
```python
import ray

# Read Parquet files
ds = ray.data.read_parquet("s3://bucket/data/*.parquet")

# Transform data (lazy execution); pandas batch format enables .str methods
ds = ds.map_batches(
    lambda batch: {"processed": batch["text"].str.lower().to_numpy()},
    batch_format="pandas",
)

# Consume data
for batch in ds.iter_batches(batch_size=100):
    print(batch)
```
Integration with Ray Train
```python
import ray
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer

# Create dataset
train_ds = ray.data.read_parquet("s3://bucket/train/*.parquet")

def train_func(config):
    # Access this worker's shard of the dataset
    shard = ray.train.get_dataset_shard("train")
    for epoch in range(10):
        for batch in shard.iter_batches(batch_size=32):
            # Train on batch
            pass

# Train with Ray
trainer = TorchTrainer(
    train_func,
    datasets={"train": train_ds},
    scaling_config=ScalingConfig(num_workers=4, use_gpu=True),
)
trainer.fit()
```
Reading data
From cloud storage
```python
import ray

# Parquet (recommended for ML)
ds = ray.data.read_parquet("s3://bucket/data/*.parquet")

# CSV
ds = ray.data.read_csv("s3://bucket/data/*.csv")

# JSON
ds = ray.data.read_json("gs://bucket/data/*.json")

# Images
ds = ray.data.read_images("s3://bucket/images/")
```
From Python objects
```python
import pandas as pd
import ray

# From list
ds = ray.data.from_items([{"id": i, "value": i * 2} for i in range(1000)])

# From range (synthetic data)
ds = ray.data.range(1_000_000)

# From pandas
df = pd.DataFrame({"col1": [1, 2, 3], "col2": [4, 5, 6]})
ds = ray.data.from_pandas(df)
```
Transformations
Map batches (vectorized)
```python
# Batch transformation (vectorized, fast)
def process_batch(batch):
    batch["doubled"] = batch["value"] * 2
    return batch

ds = ds.map_batches(process_batch, batch_size=1000)
```
Row transformations
```python
# Row-by-row (slower than map_batches)
def process_row(row):
    row["squared"] = row["value"] ** 2
    return row

ds = ds.map(process_row)
```
Filter
```python
# Filter rows
ds = ds.filter(lambda row: row["value"] > 100)
```
Group by and aggregate
```python
# Group by column
ds = ds.groupby("category").count()

# Custom aggregation: the UDF must return a batch (dict of columns)
ds = ds.groupby("category").map_groups(
    lambda group: {"sum": [group["value"].sum()]}
)
```
GPU-accelerated transforms
```python
# Use GPU for preprocessing
def preprocess_images_gpu(batch):
    import torch

    images = torch.tensor(batch["image"]).cuda()
    # GPU preprocessing
    processed = images * 255
    return {"processed": processed.cpu().numpy()}

ds = ds.map_batches(
    preprocess_images_gpu,
    batch_size=64,
    num_gpus=1,  # Request one GPU per task
)
```
Writing data
```python
# Write to Parquet
ds.write_parquet("s3://bucket/output/")

# Write to CSV
ds.write_csv("output/")

# Write to JSON
ds.write_json("output/")
```
Performance optimization
Repartition
```python
# Control parallelism: e.g., 100 blocks for a 100-core cluster
ds = ds.repartition(100)
```
Batch size tuning
```python
# Larger batches speed up vectorized ops
ds.map_batches(process_fn, batch_size=10000)  # vs. batch_size=100
```
Streaming execution
```python
# Process data larger than memory
ds = ray.data.read_parquet("s3://huge-dataset/")
for batch in ds.iter_batches(batch_size=1000):
    process(batch)  # Streamed; never fully loaded into memory
```
Common patterns
Batch inference
```python
import ray

# Load model (called once per worker)
def load_model():
    return MyModel()

# Inference UDF: a callable class keeps the model in actor state across batches
class BatchInference:
    def __init__(self):
        self.model = load_model()

    def __call__(self, batch):
        predictions = self.model(batch["input"])
        return {"prediction": predictions}

# Run distributed inference
ds = ray.data.read_parquet("s3://data/")
predictions = ds.map_batches(
    BatchInference,
    batch_size=32,
    num_gpus=1,
    concurrency=2,  # Number of model replicas; required for class-based UDFs
)
predictions.write_parquet("s3://output/")
```
Data preprocessing pipeline
```python
# Multi-step pipeline: transforms stay lazy; write_parquet triggers execution
(
    ray.data.read_parquet("s3://raw/")
    .map_batches(clean_data)
    .map_batches(tokenize)
    .map_batches(augment)
    .write_parquet("s3://processed/")
)
```
Integration with ML frameworks
PyTorch
```python
# Iterate over the dataset as PyTorch tensors
for batch in ds.iter_torch_batches(batch_size=32):
    # batch is a dict mapping column names to torch.Tensors
    inputs, labels = batch["features"], batch["label"]
```
TensorFlow
```python
# Convert to a tf.data.Dataset
tf_ds = ds.to_tf(feature_columns=["image"], label_columns="label", batch_size=32)
for features, labels in tf_ds:
    # Train model
    pass
```
Supported data formats
| Format | Read | Write | Use Case |
|---|---|---|---|
| Parquet | ✅ | ✅ | ML data (recommended) |
| CSV | ✅ | ✅ | Tabular data |
| JSON | ✅ | ✅ | Semi-structured |
| Images | ✅ | ❌ | Computer vision |
| NumPy | ✅ | ✅ | Arrays |
| Pandas | ✅ | ❌ | DataFrames |
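As a sketch of the NumPy row above (the array shape and output path are placeholders; from_numpy is expected to store the array in a column named "data"):

```python
import numpy as np
import ray

arr = np.random.rand(1000, 16)
ds = ray.data.from_numpy(arr)                         # Array lands in the "data" column
ds.write_numpy("/tmp/ray_numpy_out/", column="data")  # Write that column back out
```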
Performance benchmarks
Scaling (processing 100GB data):
- 1 node (16 cores): ~30 minutes
- 4 nodes (64 cores): ~8 minutes
- 16 nodes (256 cores): ~2 minutes
GPU acceleration (image preprocessing):
- CPU only: 1,000 images/sec
- 1 GPU: 5,000 images/sec
- 4 GPUs: 18,000 images/sec
Use cases
Production deployments:
- Pinterest: Last-mile data processing for model training
- ByteDance: Scaling offline inference with multi-modal LLMs
- Spotify: ML platform for batch inference
References
- Transformations Guide - Map, filter, groupby operations
- Integration Guide - Ray Train, PyTorch, TensorFlow
Resources
- Docs: https://docs.ray.io/en/latest/data/data.html
- GitHub: https://github.com/ray-project/ray ⭐ 36,000+
- Version: Ray 2.40.0+
- Examples: https://docs.ray.io/en/latest/data/examples/overview.html
Source
https://github.com/Orchestra-Research/AI-Research-SKILLs/blob/main/05-data-processing/ray-data/SKILL.md
Overview
Ray Data is a distributed data processing library for ML and AI workloads. It enables streaming execution across CPU and GPU, supports Parquet, CSV, JSON, and images, and integrates with Ray Train, PyTorch, and TensorFlow. It scales from a single machine to hundreds of nodes for batch inference, preprocessing, multi-modal loading, and distributed ETL.
How This Skill Works
Ray Data builds lazy, distributed data pipelines that read from cloud storage (Parquet/CSV/JSON/images) and apply transformations with map_batches and other ops. It can accelerate transforms with GPUs and exposes data to Ray Train workflows, so pipelines run across clusters and support streaming-style processing of datasets larger than memory.
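Laziness is the key mechanic: reads and transforms only build an execution plan, and work starts when the dataset is consumed. A minimal sketch (ray.data.range produces a single "id" column):

```python
import ray

ds = ray.data.range(1000)                           # Lazy: nothing runs yet
ds = ds.map_batches(lambda b: {"id": b["id"] * 2})  # Still lazy: extends the plan
print(ds.take(5))                                   # Consuming triggers execution
```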
When to Use It
- Processing large datasets (>100GB) for ML training
- Distributed data preprocessing across a cluster
- Building batch inference pipelines
- Loading multi-modal data (images, audio, video)
- Scaling data processing from a laptop to a cluster
Quick Start
- Step 1: Install Ray Data dependencies with pip install -U 'ray[data]'
- Step 2: Load and transform data: read with ds = ray.data.read_parquet('s3://bucket/data/*.parquet'), transform with ds.map_batches(...), and consume with ds.iter_batches(batch_size=100)
- Step 3: Integrate with training by using a TorchTrainer with datasets and trainer.fit()
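A condensed end-to-end sketch of these three steps (the bucket paths are placeholders, and the lambda assumes a text column read in pandas batch format):

```python
import ray
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer

# Steps 1-2: read and transform (lazy until consumed)
ds = ray.data.read_parquet("s3://bucket/data/*.parquet")
ds = ds.map_batches(
    lambda batch: {"processed": batch["text"].str.lower().to_numpy()},
    batch_format="pandas",
)

# Step 3: hand the dataset to a trainer
def train_func(config):
    shard = ray.train.get_dataset_shard("train")
    for batch in shard.iter_batches(batch_size=32):
        pass  # train on the batch here

trainer = TorchTrainer(
    train_func,
    datasets={"train": ds},
    scaling_config=ScalingConfig(num_workers=4, use_gpu=True),
)
trainer.fit()
```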
Best Practices
- Prefer Parquet for ML workloads due to columnar access
- Use ds.map_batches for vectorized transformations
- Leverage GPU-accelerated transforms when processing heavy image/audio data
- Integrate with Ray Train by exposing datasets and using ScalingConfig
- Read from appropriate sources with read_parquet/read_csv/read_images to match your data
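As an illustration of the vectorization advice (the "id" column comes from ray.data.range):

```python
import ray

ds = ray.data.range(1_000_000)

# Preferred: vectorized, one NumPy operation per batch
fast = ds.map_batches(lambda batch: {"id": batch["id"] * 2})

# Avoid for bulk numeric work: one Python call per row
slow = ds.map(lambda row: {"id": row["id"] * 2})
```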
Example Use Cases
- Build a batch inference pipeline by reading Parquet data and applying map_batches for preprocessing
- Execute a distributed ETL pipeline across cloud storage with read_parquet and write targets
- Load multi-modal data (images, audio, video) for a single training job
- Train a PyTorch or TensorFlow model using TorchTrainer with a Ray dataset
- Scale from a laptop to a cluster by distributing data loading and transforms
Related Skills
tensorboard
Orchestra-Research/AI-Research-SKILLs
Visualize training metrics, debug models with histograms, compare experiments, visualize model graphs, and profile performance with TensorBoard - Google's ML visualization toolkit
huggingface-accelerate
Orchestra-Research/AI-Research-SKILLs
Simplest distributed training API. 4 lines to add distributed support to any PyTorch script. Unified API for DeepSpeed/FSDP/Megatron/DDP. Automatic device placement, mixed precision (FP16/BF16/FP8). Interactive config, single launch command. HuggingFace ecosystem standard.
optimizing-attention-flash
Orchestra-Research/AI-Research-SKILLs
Optimizes transformer attention with Flash Attention for 2-4x speedup and 10-20x memory reduction. Use when training/running transformers with long sequences (>512 tokens), encountering GPU memory issues with attention, or need faster inference. Supports PyTorch native SDPA, flash-attn library, H100 FP8, and sliding window attention.
nemo-curator
Orchestra-Research/AI-Research-SKILLs
GPU-accelerated data curation for LLM training. Supports text/image/video/audio. Features fuzzy deduplication (16× faster), quality filtering (30+ heuristics), semantic deduplication, PII redaction, NSFW detection. Scales across GPUs with RAPIDS. Use for preparing high-quality training datasets, cleaning web data, or deduplicating large corpora.
pytorch-fsdp2
Orchestra-Research/AI-Research-SKILLs
Adds PyTorch FSDP2 (fully_shard) to training scripts with correct init, sharding, mixed precision/offload config, and distributed checkpointing. Use when models exceed single-GPU memory or when you need DTensor-based sharding with DeviceMesh.
ray-train
Orchestra-Research/AI-Research-SKILLs
Distributed training orchestration across clusters. Scales PyTorch/TensorFlow/HuggingFace from laptop to 1000s of nodes. Built-in hyperparameter tuning with Ray Tune, fault tolerance, elastic scaling. Use when training massive models across multiple machines or running distributed hyperparameter sweeps.