
parameter-optimization

npx machina-cli add skill HeshamFS/materials-simulation-skills/parameter-optimization --openclaw

Parameter Optimization

Goal

Provide a workflow to design experiments, rank parameter influence, and select optimization strategies for materials simulation calibration.

Requirements

  • Python 3.8+
  • No external dependencies (uses Python standard library only)

Inputs to Gather

Before running any scripts, collect from the user:

| Input | Description | Example |
|---|---|---|
| Parameter bounds | Min/max for each parameter with units | kappa: [0.1, 10.0] W/mK |
| Evaluation budget | Max number of simulations allowed | 50 runs |
| Noise level | Stochasticity of simulation outputs | low, medium, high |
| Constraints | Feasibility rules or forbidden regions | kappa + mobility < 5 |

Decision Guidance

Choosing a DOE Method

Is dimension <= 3 AND full coverage needed?
├── YES → Use factorial
└── NO → Is sensitivity analysis the goal?
    ├── YES → Use sobol (quasi-random)
    └── NO → Use lhs (Latin Hypercube)
| Method | Best For | Avoid When |
|---|---|---|
| lhs | General exploration, moderate dimensions (3-20) | Need exact grid coverage |
| sobol | Sensitivity analysis, uniform coverage | Very high dimensions (>20) |
| factorial | Low dimension (<4), need all corners | High dimension (exponential growth) |
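To make the lhs row concrete, here is a minimal, stdlib-only sketch of Latin Hypercube sampling (illustrative only; the skill's actual doe_generator.py may be implemented differently): each parameter's range is split into n equal strata, one point is drawn per stratum, and the stratum order is shuffled independently per dimension.

```python
import random

def lhs_samples(bounds, n, seed=0):
    """Latin Hypercube sampling: one point per stratum in each dimension,
    with stratum order shuffled independently per dimension."""
    rng = random.Random(seed)
    dims = len(bounds)
    # For each dimension, a shuffled list of stratum indices 0..n-1.
    strata = []
    for _ in range(dims):
        idx = list(range(n))
        rng.shuffle(idx)
        strata.append(idx)
    samples = []
    for i in range(n):
        point = []
        for d, (lo, hi) in enumerate(bounds):
            k = strata[d][i]              # which stratum this sample falls in
            u = (k + rng.random()) / n    # uniform draw inside that stratum
            point.append(lo + u * (hi - lo))
        samples.append(point)
    return samples

# 20 samples for kappa in [0.1, 10.0] and mobility in [0.5, 2.0]
pts = lhs_samples([(0.1, 10.0), (0.5, 2.0)], 20)
```

Each dimension is guaranteed exactly one sample per stratum, which is what gives LHS better marginal coverage than plain random sampling at the same budget.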

Choosing an Optimizer

Is dimension <= 5 AND budget <= 100?
├── YES → Bayesian Optimization
└── NO → Is dimension <= 20?
    ├── YES → CMA-ES
    └── NO → Random Search with screening
| Noise Level | Recommendation |
|---|---|
| Low | Gradient-based if derivatives available, else Bayesian Optimization |
| Medium | Bayesian Optimization with noise model |
| High | Evolutionary algorithms or robust Bayesian Optimization |
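The decision tree and noise table above can be mirrored in a few lines. This is an illustrative sketch of the selection logic, not the actual optimizer_selector.py:

```python
def select_optimizer(dim, budget, noise="low"):
    """Mirror the decision tree: Bayesian Optimization for small, cheap
    problems, CMA-ES for moderate dimensions, random search beyond that."""
    if dim <= 5 and budget <= 100:
        rec = "bayesian_optimization"
    elif dim <= 20:
        rec = "cma_es"
    else:
        rec = "random_search_with_screening"
    notes = {
        "low": "gradient-based if derivatives are available",
        "medium": "use a noise model in the surrogate",
        "high": "prefer evolutionary or robust BO variants",
    }[noise]
    return {"recommended": rec, "notes": notes}

select_optimizer(3, 50)  # recommends bayesian_optimization
```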

Script Outputs (JSON Fields)

| Script | Output Fields |
|---|---|
| scripts/doe_generator.py | samples, method, coverage |
| scripts/optimizer_selector.py | recommended, expected_evals, notes |
| scripts/sensitivity_summary.py | ranking, notes |
| scripts/surrogate_builder.py | model_type, metrics, notes |
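Because every script accepts --json, its output can be consumed programmatically. A sketch of one way to do this (the field names come from the table above; the exact payload shape is an assumption):

```python
import json
import subprocess

def run_skill_script(args):
    """Run a skill script with --json appended and parse its stdout."""
    out = subprocess.run(
        ["python3"] + args + ["--json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

# e.g. doe = run_skill_script(["scripts/doe_generator.py",
#                              "--params", "3", "--budget", "20",
#                              "--method", "lhs"])
# then use doe["samples"], doe["method"], doe["coverage"]
```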

Workflow

  1. Generate DOE with scripts/doe_generator.py
  2. Run simulations at DOE sample points (user's responsibility)
  3. Summarize sensitivity with scripts/sensitivity_summary.py
  4. Choose optimizer using scripts/optimizer_selector.py
  5. (Optional) Fit surrogate with scripts/surrogate_builder.py

CLI Examples

# Generate 20 LHS samples for 3 parameters
python3 scripts/doe_generator.py --params 3 --budget 20 --method lhs --json

# Rank parameters by sensitivity scores
python3 scripts/sensitivity_summary.py --scores 0.2,0.5,0.3 --names kappa,mobility,W --json

# Get optimizer recommendation for 3D problem with 50 eval budget
python3 scripts/optimizer_selector.py --dim 3 --budget 50 --noise low --json

# Build surrogate model from simulation data
python3 scripts/surrogate_builder.py --x 0,1,2 --y 10,12,15 --model rbf --json
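Conceptually, the sensitivity summary normalizes the scores and sorts parameters by influence. A minimal sketch of that idea (not the script itself):

```python
def rank_sensitivity(names, scores):
    """Pair names with normalized scores and sort, most influential first."""
    total = sum(scores)
    norm = [s / total for s in scores] if total else list(scores)
    return sorted(zip(names, norm), key=lambda kv: kv[1], reverse=True)

rank_sensitivity(["kappa", "mobility", "W"], [0.2, 0.5, 0.3])
# mobility ranks first, then W, then kappa
```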

Conversational Workflow Example

User: I need to calibrate thermal conductivity and diffusivity for my FEM simulation. I can run about 30 simulations.

Agent workflow:

  1. Identify 2 parameters → --params 2
  2. Budget is 30 → --budget 30
  3. Use LHS for general exploration:
    python3 scripts/doe_generator.py --params 2 --budget 30 --method lhs --json
    
  4. After user runs simulations and provides outputs, summarize sensitivity:
    python3 scripts/sensitivity_summary.py --scores 0.7,0.3 --names conductivity,diffusivity --json
    
  5. Recommend optimizer:
    python3 scripts/optimizer_selector.py --dim 2 --budget 30 --noise low --json
    

Error Handling

| Error | Cause | Resolution |
|---|---|---|
| params must be positive | Zero or negative dimension | Ask user for valid parameter count |
| budget must be positive | Zero or negative budget | Ask user for realistic simulation budget |
| method must be lhs, sobol, or factorial | Invalid method | Use decision guidance to pick valid method |
| scores must be comma-separated | Malformed input | Reformat as 0.1,0.2,0.3 |
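These errors can be avoided by validating inputs before invoking the scripts. An illustrative pre-flight check that mirrors the table (not part of the skill itself):

```python
VALID_METHODS = {"lhs", "sobol", "factorial"}

def validate_inputs(params, budget, method, scores=None):
    """Raise ValueError with the messages from the error table above."""
    if params <= 0:
        raise ValueError("params must be positive")
    if budget <= 0:
        raise ValueError("budget must be positive")
    if method not in VALID_METHODS:
        raise ValueError("method must be lhs, sobol, or factorial")
    if scores is not None:
        try:
            return [float(s) for s in scores.split(",")]
        except ValueError:
            raise ValueError("scores must be comma-separated")
    return None

validate_inputs(3, 50, "lhs", "0.2,0.5,0.3")  # returns [0.2, 0.5, 0.3]
```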

Limitations

  • Not for real-time optimization: Scripts provide recommendations, not live optimization loops
  • Surrogate is a placeholder: surrogate_builder.py computes basic metrics; replace with actual model for production
  • No automatic simulation execution: User must run simulations externally and provide results
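Since surrogate_builder.py only computes basic metrics, a production replacement must fit a real model. A stdlib-only sketch of a 1-D Gaussian-RBF interpolant is shown below; in practice you would reach for scipy or a Gaussian-process library instead:

```python
import math

def fit_rbf(xs, ys, eps=1.0):
    """Fit weights w solving A w = y with A_ij = exp(-(eps*|x_i - x_j|)^2),
    via Gaussian elimination with partial pivoting."""
    n = len(xs)
    A = [[math.exp(-(eps * abs(xs[i] - xs[j])) ** 2) for j in range(n)]
         for i in range(n)]
    b = list(ys)
    # Forward elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]
    # The surrogate: a weighted sum of Gaussian bumps centered at the data.
    return lambda x: sum(
        w[j] * math.exp(-(eps * abs(x - xs[j])) ** 2) for j in range(n))

surrogate = fit_rbf([0.0, 1.0, 2.0], [10.0, 12.0, 15.0])
```

An RBF interpolant reproduces the training points exactly, which makes it a reasonable first surrogate for smooth, noise-free simulation outputs.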

References

  • references/doe_methods.md - Detailed DOE method comparison
  • references/optimizer_selection.md - Optimizer algorithm details
  • references/sensitivity_guidelines.md - Sensitivity analysis interpretation
  • references/surrogate_guidelines.md - Surrogate model selection

Version History

  • v1.1.0 (2024-12-24): Enhanced documentation, decision guidance, conversational examples
  • v1.0.0: Initial release with core scripts

Source

git clone https://github.com/HeshamFS/materials-simulation-skills
Skill path: skills/simulation-workflow/parameter-optimization/SKILL.md

Overview

This skill provides a repeatable workflow to design experiments, rank parameter influence, and pick optimization strategies for materials simulations. It supports calibration, uncertainty analysis, and efficient parameter sweeps using DOE, LHS, Sobol analysis, surrogates, and Bayesian optimization.

How This Skill Works

Gather inputs such as parameter bounds, evaluation budget, noise level, and constraints. The workflow selects a DOE method and an optimizer, generates samples with the chosen method (e.g., lhs, sobol, factorial), and runs simulations at those points. It then summarizes sensitivity and, if needed, builds a surrogate to guide further optimization.

When to Use It

  • Calibrating material properties (e.g., conductivity, diffusivity) against experimental data
  • Conducting uncertainty analyses to rank parameter influence using Sobol or sensitivity summaries
  • Performing parameter sweeps within defined bounds to explore performance envelopes
  • Designing experiments with LHS for general exploration in moderate-dimensional spaces
  • Setting up surrogate modeling or Bayesian optimization for expensive simulations

Quick Start

  1. Define parameter bounds and the evaluation budget (e.g., 3 parameters, 30 runs)
  2. Generate DOE samples with the preferred method, e.g., python3 scripts/doe_generator.py --params 3 --budget 30 --method lhs --json
  3. Run simulations, then summarize sensitivity and select an optimizer with the provided scripts

Best Practices

  • Collect clear parameter bounds with units and an explicit evaluation budget
  • Choose DOE method based on dimension and coverage needs (factorial, sobol, lhs)
  • Use sensitivity analysis early to rank parameter importance before full optimization
  • Align optimization strategy with noise level and budget (Bayesian optimization for noisy or expensive evaluations)
  • Validate surrogate models with holdout data and compare against direct simulations
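The holdout check in the last bullet amounts to fitting on part of the data and scoring the rest. A minimal RMSE sketch (the surrogate here is any callable model; the linear fit in the example is purely illustrative):

```python
import math

def holdout_rmse(model, x_test, y_test):
    """Root-mean-square error of a surrogate on held-out simulation results."""
    errs = [(model(x) - y) ** 2 for x, y in zip(x_test, y_test)]
    return math.sqrt(sum(errs) / len(errs))

# A model that exactly matches linear holdout data gives RMSE ~0.
model = lambda x: 2.0 * x + 1.0
holdout_rmse(model, [0.0, 1.0, 2.0], [1.0, 3.0, 5.0])
```

Comparing this RMSE against the spread of the holdout outputs tells you whether the surrogate is trustworthy enough to guide further optimization.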

Example Use Cases

  • Calibrating thermal conductivity and diffusivity in FEM models against experiments
  • Ranking kappa, mobility, and other parameters to identify dominant drivers in a transport model
  • Applying 4-parameter LHS sampling to explore diffusion behavior under uncertainty
  • Building an RBF or other surrogate to accelerate expensive material simulations
  • Using Bayesian optimization to calibrate a multi-physics model within a fixed budget
