
simulation-orchestrator

npx machina-cli add skill HeshamFS/materials-simulation-skills/simulation-orchestrator --openclaw

Simulation Orchestrator

Goal

Provide tools to manage multi-simulation campaigns: generate parameter sweeps, track job execution status, and aggregate results from completed runs.

Requirements

  • Python 3.10+
  • No external dependencies (uses Python standard library only)
  • Works on Linux, macOS, and Windows

Inputs to Gather

Before running orchestration scripts, collect from the user:

Input               Description                        Example
Base config         Template simulation configuration  base_config.json
Parameter ranges    Parameters to sweep with bounds    dt:[1e-4,1e-2], kappa:[0.1,1.0]
Sweep method        How to sample parameter space      grid, lhs, linspace
Output directory    Where to store campaign files      ./campaign_001
Simulation command  Command to run each simulation     python sim.py --config {config}
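
A base config is any JSON file your simulator already accepts; the sweep only overrides the parameters you list. A minimal hypothetical example (field names here are placeholders, not a required schema):

```json
{
  "simulation": "phase_field",
  "dt": 1e-3,
  "kappa": 0.5,
  "steps": 10000,
  "output": "results.json"
}
```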

Decision Guidance

Choosing a Sweep Method

Need every combination (full factorial)?
├── YES → Use grid (warning: exponential growth with parameters)
└── NO → Is space-filling coverage needed?
    ├── YES → Use lhs (Latin Hypercube Sampling)
    └── NO → Use linspace for uniform sampling per parameter
Method    Best For                                  Sample Count
grid      Low dimensions (1-3), need exact corners  n^d (exponential)
linspace  1D sweeps, uniform spacing                n per parameter
lhs       High dimensions, space-filling            user-specified budget
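
The three methods can be sketched with the standard library alone. This is an illustrative sketch of the sampling math, not the shipped sweep_generator.py:

```python
import itertools
import random

def linspace(lo, hi, n):
    """n evenly spaced values from lo to hi inclusive."""
    if n == 1:
        return [lo]
    step = (hi - lo) / (n - 1)
    return [lo + i * step for i in range(n)]

def grid(bounds, n):
    """Full factorial: every combination of n points per parameter (n**d runs)."""
    axes = {name: linspace(lo, hi, n) for name, (lo, hi) in bounds.items()}
    return [dict(zip(axes, combo)) for combo in itertools.product(*axes.values())]

def lhs(bounds, samples, seed=0):
    """Latin Hypercube: one sample per stratum of each parameter, shuffled."""
    rng = random.Random(seed)
    cols = {}
    for name, (lo, hi) in bounds.items():
        strata = [(i + rng.random()) / samples for i in range(samples)]
        rng.shuffle(strata)
        cols[name] = [lo + u * (hi - lo) for u in strata]
    return [{name: cols[name][i] for name in cols} for i in range(samples)]

bounds = {"dt": (1e-4, 1e-2), "kappa": (0.1, 1.0)}
print(len(grid(bounds, 5)))   # 5**2 = 25 combinations
print(len(lhs(bounds, 20)))   # exactly the requested budget: 20
```

Note how grid cost grows as n^d while LHS stays at the user-specified budget regardless of dimension, which is why the table above recommends LHS beyond 3 parameters.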

Campaign Size Guidelines

Parameters  Grid Points Each  Total Runs  Recommendation
1           10                10          Grid is fine
2           10                100         Grid acceptable
3           10                1,000       Consider LHS
4+          10                10,000+     Use LHS or DOE

Script Outputs (JSON Fields)

Script                        Output Fields
scripts/sweep_generator.py    configs, parameter_space, sweep_method, total_runs
scripts/campaign_manager.py   campaign_id, status, jobs, progress
scripts/job_tracker.py        job_id, status, start_time, end_time, exit_code
scripts/result_aggregator.py  summary, statistics, best_run, failed_runs
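
Because every script emits JSON with --json, downstream tooling can consume the fields directly. A sketch of parsing a sweep_generator.py payload, assuming the field names in the table above (the exact nesting is an assumption):

```python
import json

# Hypothetical payload in the shape sweep_generator.py reports with --json.
payload = json.loads("""
{"sweep_method": "linspace", "total_runs": 15,
 "parameter_space": {"dt": [1e-4, 1e-2], "kappa": [0.1, 1.0]},
 "configs": ["run_000.json", "run_001.json"]}
""")
print(f"{payload['total_runs']} runs via {payload['sweep_method']}")
```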

Workflow

Step 1: Generate Parameter Sweep

Create configurations for all parameter combinations:

python3 scripts/sweep_generator.py \
    --base-config base_config.json \
    --params "dt:1e-4:1e-2:5,kappa:0.1:1.0:3" \
    --method linspace \
    --output-dir ./campaign_001 \
    --json
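
Conceptually, Step 1 produces one config file per parameter combination, each a copy of the base config with the swept values overridden. A sketch of that idea (the base config contents and the run_NNN.json naming are assumptions, not the script's contract):

```python
import itertools
import json
import pathlib

base = {"solver": "explicit", "dt": 1e-3, "kappa": 0.5}  # stand-in base config
axes = {
    "dt":    [1e-4, 2.575e-3, 5.05e-3, 7.525e-3, 1e-2],  # 5 values
    "kappa": [0.1, 0.55, 1.0],                            # 3 values
}
outdir = pathlib.Path("./campaign_001/configs")
outdir.mkdir(parents=True, exist_ok=True)
# One JSON config per combination: base config with swept values merged in.
for i, combo in enumerate(itertools.product(*axes.values())):
    cfg = {**base, **dict(zip(axes, combo))}
    (outdir / f"run_{i:03d}.json").write_text(json.dumps(cfg, indent=2))
print(len(list(outdir.glob("run_*.json"))))  # 5 * 3 = 15 configs
```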

Step 2: Initialize Campaign

Create campaign tracking structure:

python3 scripts/campaign_manager.py \
    --action init \
    --config-dir ./campaign_001 \
    --command "python sim.py --config {config}" \
    --json

Step 3: Track Job Status

Monitor running jobs:

python3 scripts/job_tracker.py \
    --campaign-dir ./campaign_001 \
    --update \
    --json
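
Status tracking is file-based. A sketch of the idea behind an --update pass, assuming one small JSON status file per job (the actual on-disk layout used by job_tracker.py may differ):

```python
import collections
import json
import pathlib

# Stand-in status files so the tally below has something to scan.
statedir = pathlib.Path("./campaign_001/status")
statedir.mkdir(parents=True, exist_ok=True)
for job_id, status in [("run_000", "completed"), ("run_001", "completed"),
                       ("run_002", "running"), ("run_003", "failed")]:
    (statedir / f"{job_id}.json").write_text(
        json.dumps({"job_id": job_id, "status": status, "exit_code": 0}))

# Tally per-job statuses into a campaign-level progress summary.
counts = collections.Counter(
    json.loads(p.read_text())["status"] for p in statedir.glob("*.json"))
print(dict(counts))  # e.g. {'completed': 2, 'running': 1, 'failed': 1}
```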

Step 4: Aggregate Results

Combine results from completed runs:

python3 scripts/result_aggregator.py \
    --campaign-dir ./campaign_001 \
    --metric objective_value \
    --json
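
Aggregation boils down to collecting one metric across completed runs and summarizing it. A sketch under assumed file names (the per-run result layout and the objective_value field are placeholders):

```python
import json
import pathlib
import statistics

# Stand-in per-run result files, one metric each.
results_dir = pathlib.Path("./campaign_001/results")
results_dir.mkdir(parents=True, exist_ok=True)
for i, value in enumerate([0.42, 0.31, 0.55]):
    (results_dir / f"run_{i:03d}.json").write_text(
        json.dumps({"objective_value": value}))

# Collect the metric, then compute summary statistics and the best run.
values = {p.name: json.loads(p.read_text())["objective_value"]
          for p in sorted(results_dir.glob("run_*.json"))}
best = min(values, key=values.get)  # lower objective is better here
summary = {"mean": statistics.mean(values.values()),
           "stdev": statistics.stdev(values.values()),
           "best_run": best}
print(summary["best_run"])  # run_001.json (objective 0.31)
```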

CLI Examples

# Generate 5x3=15 runs varying dt (5 values) and kappa (3 values)
python3 scripts/sweep_generator.py \
    --base-config sim.json \
    --params "dt:1e-4:1e-2:5,kappa:0.1:1.0:3" \
    --method linspace \
    --output-dir ./sweep_001 \
    --json

# Generate LHS samples for 4 parameters with budget of 20 runs
python3 scripts/sweep_generator.py \
    --base-config sim.json \
    --params "dt:1e-4:1e-2,kappa:0.1:1.0,M:1e-6:1e-4,W:0.5:2.0" \
    --method lhs \
    --samples 20 \
    --output-dir ./lhs_001 \
    --json

# Check campaign status
python3 scripts/campaign_manager.py \
    --action status \
    --config-dir ./sweep_001 \
    --json

# Get summary statistics from completed runs
python3 scripts/result_aggregator.py \
    --campaign-dir ./sweep_001 \
    --metric final_energy \
    --json

Conversational Workflow Example

User: I want to run a parameter sweep on dt and kappa for my phase-field simulation. I want to try 5 values of dt between 1e-4 and 1e-2, and 4 values of kappa between 0.1 and 1.0.

Agent workflow:

  1. Calculate total runs: 5 x 4 = 20 runs
  2. Generate sweep configurations:
    python3 scripts/sweep_generator.py \
        --base-config simulation.json \
        --params "dt:1e-4:1e-2:5,kappa:0.1:1.0:4" \
        --method linspace \
        --output-dir ./dt_kappa_sweep \
        --json
    
  3. Initialize campaign:
    python3 scripts/campaign_manager.py \
        --action init \
        --config-dir ./dt_kappa_sweep \
        --command "python phase_field.py --config {config}" \
        --json
    
  4. After user runs simulations, aggregate results:
    python3 scripts/result_aggregator.py \
        --campaign-dir ./dt_kappa_sweep \
        --metric interface_width \
        --json
    

Error Handling

Error                     Cause                       Resolution
Base config not found     Invalid file path           Verify the base config file exists
Invalid parameter format  Malformed param string      Use name:min:max:count or name:min:max
Output directory exists   Would overwrite             Use --force or choose a new directory
No completed jobs         No results to aggregate     Wait for jobs to complete or check for failures
Metric not found          Result files missing field  Verify the metric name in the result JSON
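
The two accepted parameter formats can be validated up front. An illustrative parser for name:min:max:count and name:min:max (a sketch, not the shipped implementation):

```python
def parse_params(spec: str) -> dict:
    """Parse "name:min:max[:count],..." into {name: (min, max, count|None)}.

    Raises ValueError for malformed entries, mirroring the
    "Invalid parameter format" error above.
    """
    out = {}
    for entry in spec.split(","):
        parts = entry.strip().split(":")
        if len(parts) not in (3, 4):
            raise ValueError(f"expected name:min:max[:count], got {entry!r}")
        name, lo, hi = parts[0], float(parts[1]), float(parts[2])
        count = int(parts[3]) if len(parts) == 4 else None
        if lo >= hi:
            raise ValueError(f"{name}: min must be < max")
        out[name] = (lo, hi, count)
    return out

print(parse_params("dt:1e-4:1e-2:5,kappa:0.1:1.0"))
# {'dt': (0.0001, 0.01, 5), 'kappa': (0.1, 1.0, None)}
```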

Integration with Other Skills

The simulation-orchestrator works with other simulation-workflow skills:

parameter-optimization          simulation-orchestrator
        │                              │
        │ DOE samples ────────────────>│ Generate configs
        │                              │
        │                              │ Run simulations
        │                              │
        │<──────────────────────────── │ Aggregate results
        │                              │
        │ Sensitivity analysis         │
        │ Optimizer selection          │

Typical Combined Workflow

  1. Use parameter-optimization/doe_generator.py to get sample points
  2. Use simulation-orchestrator/sweep_generator.py to create configs
  3. Run simulations (user's responsibility)
  4. Use simulation-orchestrator/result_aggregator.py to collect results
  5. Use parameter-optimization/sensitivity_summary.py to analyze

Limitations

  • Not a job scheduler: Does not submit jobs to SLURM/PBS; generates configs and tracks status
  • No parallel execution: User must run simulations externally (can use GNU parallel, SLURM, etc.)
  • File-based tracking: Status tracked via files; no database or real-time monitoring
  • Local filesystem: Assumes all files accessible from local machine
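
Since the skill generates configs but does not execute jobs, one simple way to drive the runs externally is a local thread pool launching one subprocess per config. Everything below is a placeholder sketch: the configs directory, the stand-in command (swap in e.g. ["python", "sim.py", "--config", str(cfg)]), and the run_NNN.json naming are assumptions about your setup:

```python
import concurrent.futures
import pathlib
import subprocess
import sys

# Stand-in configs so the pool has something to launch.
workdir = pathlib.Path("./parallel_demo/configs")
workdir.mkdir(parents=True, exist_ok=True)
for i in range(4):
    (workdir / f"run_{i:03d}.json").write_text("{}")

def run(cfg: pathlib.Path) -> int:
    # Placeholder command; replace with your real simulation invocation.
    cmd = [sys.executable, "-c", f"print('done: {cfg.name}')"]
    return subprocess.run(cmd).returncode

configs = sorted(workdir.glob("run_*.json"))
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    codes = list(pool.map(run, configs))
print(sum(c == 0 for c in codes), "of", len(codes), "runs succeeded")
```

For cluster-scale campaigns, the same loop maps naturally onto GNU parallel or a SLURM job array, with job_tracker.py picking up the status afterwards.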

References

  • references/campaign_patterns.md - Common campaign structures
  • references/sweep_strategies.md - Parameter sweep design guidance
  • references/aggregation_methods.md - Result aggregation techniques

Version History

  • v1.0.0 (2024-12-24): Initial release with sweep, campaign, tracking, and aggregation

Source

git clone https://github.com/HeshamFS/materials-simulation-skills.git
# skill file: skills/simulation-workflow/simulation-orchestrator/SKILL.md

Overview

Simulation Orchestrator provides tools to manage multi-simulation campaigns: generate parameter sweeps, track job execution status, and aggregate results from completed runs. It’s a lightweight, Python 3.10+ workflow that uses only the standard library to run on Linux, macOS, and Windows.

How This Skill Works

The tool generates configurations via sweep_generator.py from a base_config.json and parameter ranges, supporting grid, linspace, and lhs sweeps. Then campaign_manager.py initializes and tracks the campaign, while job_tracker.py updates per-job status and result_aggregator.py produces summaries, statistics, and best runs after completion.

When to Use It

  • You need to sweep parameters across a space (e.g., dt and kappa) and automatically generate configurations.
  • You want to run a batch of simulations in a campaign directory and monitor progress.
  • You require end-to-end tracking of each run’s status from start to finish.
  • You need to aggregate results across runs into a concise summary with statistics.
  • You aim to automate the workflow from sweep generation through result aggregation using a cross-platform setup with no external dependencies.

Quick Start

  1. Prepare a base_config.json and define parameter ranges, e.g., dt:1e-4:1e-2:5, kappa:0.1:1.0:3.
  2. Generate the sweep: python3 scripts/sweep_generator.py --base-config base_config.json --params "dt:1e-4:1e-2:5,kappa:0.1:1.0:3" --method linspace --output-dir ./campaign_001 --json
  3. Initialize and start tracking: python3 scripts/campaign_manager.py --action init --config-dir ./campaign_001 --command "python sim.py --config {config}" --json, then monitor with job_tracker.py and aggregate with result_aggregator.py as needed.

Best Practices

  • Define a stable base_config.json and drive all sweeps through a separate parameter_space description and sweep method.
  • Validate parameter ranges and sample counts to avoid explosion, especially when using grid sweeps.
  • Choose sweep method based on dimensionality: grid for low-dim exact corners, lhs for high-dim space-filling, linspace for 1D uniform sweeps.
  • Keep outputs organized by a campaign directory and document the mapping between runs and their configs.
  • Rely on Python 3.10+ with the standard library to maximize portability across Linux, macOS, and Windows.

Example Use Cases

  • Parameter sweep for a diffusion simulation varying dt and kappa using linspace or grid.
  • DOE-style batch campaign for a fluid dynamics model with several parameters using lhs or grid.
  • Climate model batch runs with varied initial conditions and physical parameters.
  • Electronic circuit stress testing with different temperatures and resistances.
  • Pharmacokinetic simulations across multiple dose levels and clearance rates.
