
Vaex

Overview

Vaex is a high-performance Python library for lazy, out-of-core DataFrames: it processes and visualizes tabular datasets too large to fit into RAM by memory-mapping files on disk and evaluating expressions lazily. Vaex can scan on the order of a billion rows per second, enabling interactive exploration and analysis of datasets with billions of rows.

When to Use This Skill

Use Vaex when:

  • Processing tabular datasets larger than available RAM (gigabytes to terabytes)
  • Performing fast statistical aggregations on massive datasets
  • Creating visualizations and heatmaps of large datasets
  • Building machine learning pipelines on big data
  • Converting between data formats (CSV, HDF5, Arrow, Parquet)
  • Needing lazy evaluation and virtual columns to avoid memory overhead
  • Working with astronomical data, financial time series, or other large-scale scientific datasets

Core Capabilities

Vaex provides six primary capability areas, each documented in detail in the references directory:

1. DataFrames and Data Loading

Load and create Vaex DataFrames from various sources including files (HDF5, CSV, Arrow, Parquet), pandas DataFrames, NumPy arrays, and dictionaries. Reference references/core_dataframes.md for:

  • Opening large files efficiently
  • Converting from pandas/NumPy/Arrow
  • Working with example datasets
  • Understanding DataFrame structure
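
As a minimal sketch (the x and y column names are illustrative), a DataFrame can also be built directly from in-memory NumPy arrays:

import vaex
import numpy as np

# Columns are referenced lazily; nothing is copied on construction
x = np.arange(1_000_000)
df = vaex.from_arrays(x=x, y=x ** 2)
print(df)  # previews first/last rows without materializing the data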

2. Data Processing and Manipulation

Perform filtering, create virtual columns, use expressions, and aggregate data without loading everything into memory. Reference references/data_processing.md for:

  • Filtering and selections
  • Virtual columns and expressions
  • Groupby operations and aggregations
  • String operations and datetime handling
  • Working with missing data
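
As an illustrative sketch (the name, ts, and amount columns are hypothetical), string and datetime accessors mirror pandas but evaluate lazily:

# Lazy string and datetime operations via the .str and .dt accessors
df['name_lower'] = df.name.str.lower()
df['year'] = df.ts.dt.year

# Aggregate per group without loading the full dataset into memory
totals = df.groupby('year').agg({'amount': 'sum'})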

3. Performance and Optimization

Leverage Vaex's lazy evaluation, caching strategies, and memory-efficient operations. Reference references/performance.md for:

  • Understanding lazy evaluation
  • Using delay=True for batching operations
  • Materializing columns when needed
  • Caching strategies
  • Asynchronous operations
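
A sketch of the memory/speed trade-off (r is an illustrative derived column): virtual columns cost no RAM but are recomputed on every pass, so a frequently read column can be materialized once:

# Virtual column: computed on the fly, recomputed on each scan
df['r'] = (df.x**2 + df.y**2)**0.5

# Materialize it to trade memory for repeated-read speed
df = df.materialize('r')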

4. Data Visualization

Create interactive visualizations of large datasets including heatmaps, histograms, and scatter plots. Reference references/visualization.md for:

  • Creating 1D and 2D plots
  • Heatmap visualizations
  • Working with selections
  • Customizing plots and subplots
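
A short sketch using the same plotting API as the quick start (column names are illustrative): named selections let you overlay subsets without copying rows:

# Define a named selection; no data is copied
df.select(df.age > 25, name='adults')

# Plot the full dataset, then only the selected subset
df.plot1d(df.x, limits=[0, 100])
df.plot1d(df.x, selection='adults', limits=[0, 100])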

5. Machine Learning Integration

Build ML pipelines with transformers, encoders, and integration with scikit-learn, XGBoost, and other frameworks. Reference references/machine_learning.md for:

  • Feature scaling and encoding
  • PCA and dimensionality reduction
  • K-means clustering
  • Integration with scikit-learn/XGBoost/CatBoost
  • Model serialization and deployment
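
A minimal sketch with the vaex.ml package (the feature names are illustrative):

import vaex.ml

# Fit a scaler out-of-core; the transformed columns are added as
# virtual columns (prefixed standard_scaled_ by default)
scaler = vaex.ml.StandardScaler(features=['x', 'y'])
df = scaler.fit_transform(df)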

6. I/O Operations

Efficiently read and write data in various formats with optimal performance. Reference references/io_operations.md for:

  • File format recommendations
  • Export strategies
  • Working with Apache Arrow
  • CSV handling for large files
  • Server and remote data access
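
For example (output paths are illustrative; the S3 line assumes credentials are configured):

# export infers the format from the file extension
df.export('output.parquet')
df.export('output.hdf5')

# Remote datasets can be opened lazily as well:
# df = vaex.open('s3://bucket/data.parquet')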

Quick Start Pattern

For most Vaex tasks, follow this pattern:

import vaex

# 1. Open or create DataFrame
df = vaex.open('large_file.hdf5')  # or .csv, .arrow, .parquet
# OR
df = vaex.from_pandas(pandas_df)

# 2. Explore the data
print(df)  # Shows first/last rows and column info
df.describe()  # Statistical summary

# 3. Create virtual columns (no memory overhead)
df['new_column'] = df.x ** 2 + df.y

# 4. Filter with selections
df_filtered = df[df.age > 25]

# 5. Compute statistics (fast, lazy evaluation)
mean_val = df.x.mean()
stats = df.groupby('category').agg({'value': 'sum'})

# 6. Visualize
df.plot1d(df.x, limits=[0, 100])
df.plot(df.x, df.y, limits='99.7%')

# 7. Export if needed
df.export_hdf5('output.hdf5')

Working with References

The reference files contain detailed information about each capability area. Load references into context based on the specific task:

  • Basic operations: Start with references/core_dataframes.md and references/data_processing.md
  • Performance issues: Check references/performance.md
  • Visualization tasks: Use references/visualization.md
  • ML pipelines: Reference references/machine_learning.md
  • File I/O: Consult references/io_operations.md

Best Practices

  1. Use HDF5 or Apache Arrow formats for optimal performance with large datasets
  2. Leverage virtual columns instead of materializing data to save memory
  3. Batch operations using delay=True when performing multiple calculations
  4. Export to efficient formats rather than keeping data in CSV
  5. Use expressions for complex calculations without intermediate storage (see the sketch after this list)
  6. Profile memory usage (for example with df.byte_size()) to understand the DataFrame's footprint and optimize operations
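
A sketch of item 5 (x is an illustrative column): expressions compose lazily, so the normalization below allocates no intermediate arrays:

# The scalar statistics are computed once; the normalized column
# stays a lazy expression until it is actually read
mu, sigma = df.x.mean(), df.x.std()
df['x_z'] = (df.x - mu) / sigma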

Common Patterns

Pattern: Converting Large CSV to HDF5

import vaex

# Stream the CSV in chunks and convert to HDF5 on the fly;
# without convert=, the whole file would be read into memory
df = vaex.from_csv('large_file.csv', convert='large_file.hdf5')

# Future loads memory-map the HDF5 file and are near-instant
df = vaex.open('large_file.hdf5')

Pattern: Efficient Aggregations

# Use delay=True to schedule multiple operations without executing them
mean_x = df.mean(df.x, delay=True)
std_y = df.std(df.y, delay=True)
sum_z = df.sum(df.z, delay=True)

# Execute all scheduled tasks in a single pass over the data
df.execute()
print(mean_x.get(), std_y.get(), sum_z.get())

Pattern: Virtual Columns for Feature Engineering

# No memory overhead - computed on the fly
df['age_squared'] = df.age ** 2
df['full_name'] = df.first_name + ' ' + df.last_name
df['is_adult'] = df.age >= 18

Resources

This skill includes reference documentation in the references/ directory:

  • core_dataframes.md - DataFrame creation, loading, and basic structure
  • data_processing.md - Filtering, expressions, aggregations, and transformations
  • performance.md - Optimization strategies and lazy evaluation
  • visualization.md - Plotting and interactive visualizations
  • machine_learning.md - ML pipelines and model integration
  • io_operations.md - File formats and data import/export

Source

https://github.com/Microck/ordinary-claude-skills/blob/main/skills_all/claude-scientific-skills/scientific-skills/vaex/SKILL.md

