parameter-optimization
npx machina-cli add skill HeshamFS/materials-simulation-skills/parameter-optimization --openclaw

Parameter Optimization
Goal
Provide a workflow to design experiments, rank parameter influence, and select optimization strategies for materials simulation calibration.
Requirements
- Python 3.8+
- No external dependencies (uses Python standard library only)
Inputs to Gather
Before running any scripts, collect from the user:
| Input | Description | Example |
|---|---|---|
| Parameter bounds | Min/max for each parameter with units | kappa: [0.1, 10.0] W/mK |
| Evaluation budget | Max number of simulations allowed | 50 runs |
| Noise level | Stochasticity of simulation outputs | low, medium, high |
| Constraints | Feasibility rules or forbidden regions | kappa + mobility < 5 |
Decision Guidance
Choosing a DOE Method
Is dimension <= 3 AND full coverage needed?
├── YES → Use factorial
└── NO → Is sensitivity analysis the goal?
├── YES → Use quasi-random (preferred; "sobol" is accepted but deprecated)
└── NO → Use lhs (Latin Hypercube)
| Method | Best For | Avoid When |
|---|---|---|
| lhs | General exploration, moderate dimensions (3-20) | Need exact grid coverage |
| sobol | Sensitivity analysis, uniform coverage | Very high dimensions (>20) |
| factorial | Low dimension (<4), need all corners | High dimension (exponential growth) |
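To make the LHS recommendation concrete, here is a minimal stratified sampler using only the standard library. This is an illustrative sketch, not the actual logic of scripts/doe_generator.py: each dimension is split into one stratum per sample, one point is drawn per stratum, and the strata are shuffled independently so no two samples share a row or column.

```python
import random

def lhs_samples(n_params, n_samples, seed=None):
    """Latin Hypercube sampling on the unit cube [0, 1)^n_params.

    Each dimension is divided into n_samples equal strata; exactly one
    point falls in each stratum, and strata are paired randomly across
    dimensions via an independent shuffle per dimension.
    """
    rng = random.Random(seed)
    columns = []
    for _ in range(n_params):
        # One point drawn uniformly inside each stratum [i/n, (i+1)/n).
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)
        columns.append(col)
    # Transpose: each sample holds one value per parameter.
    return [list(point) for point in zip(*columns)]

samples = lhs_samples(n_params=3, n_samples=20, seed=42)
```

Values are in [0, 1); scale each coordinate into the user-supplied parameter bounds before running simulations.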
Choosing an Optimizer
Is dimension <= 5 AND budget <= 100?
├── YES → Bayesian Optimization
└── NO → Is dimension <= 20?
├── YES → CMA-ES
└── NO → Random Search with screening
| Noise Level | Recommendation |
|---|---|
| Low | Gradient-based if derivatives available, else Bayesian Optimization |
| Medium | Bayesian Optimization with noise model |
| High | Evolutionary algorithms or robust Bayesian Optimization |
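The decision tree and noise table above can be sketched as a single selection function. This mirrors the documented guidance only; it is not the implementation inside scripts/optimizer_selector.py, and the returned labels are illustrative names.

```python
def recommend_optimizer(dim, budget, noise="low"):
    """Pick an optimizer family from dimension, budget, and noise level,
    following the decision guidance above."""
    if dim <= 5 and budget <= 100:
        base = "bayesian_optimization"
    elif dim <= 20:
        base = "cma_es"
    else:
        base = "random_search_with_screening"
    # Noise adjustments from the recommendation table.
    if noise == "medium" and base == "bayesian_optimization":
        return "bayesian_optimization_with_noise_model"
    if noise == "high":
        return "evolutionary_or_robust_bayesian_optimization"
    return base

recommend_optimizer(3, 50, "low")  # "bayesian_optimization"
```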
Script Outputs (JSON Fields)
| Script | Output Fields |
|---|---|
| scripts/doe_generator.py | samples, method, coverage |
| scripts/optimizer_selector.py | recommended, expected_evals, notes |
| scripts/sensitivity_summary.py | ranking, notes |
| scripts/surrogate_builder.py | model_type, metrics, notes |
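Because each script emits JSON with the fields above, downstream tooling can consume the output with the standard json module. The payload values below are made up for illustration; only the field names come from the table.

```python
import json

# Illustrative doe_generator.py payload; field names match the table,
# values are invented for this example.
raw = '{"samples": [[0.12, 0.85], [0.63, 0.31]], "method": "lhs", "coverage": 0.92}'

result = json.loads(raw)
for field in ("samples", "method", "coverage"):
    if field not in result:
        raise KeyError(f"missing expected field: {field}")

n_samples = len(result["samples"])  # 2 sample points in this payload
```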
Workflow
- Generate DOE with scripts/doe_generator.py
- Run simulations at DOE sample points (user's responsibility)
- Summarize sensitivity with scripts/sensitivity_summary.py
- Choose optimizer using scripts/optimizer_selector.py
- (Optional) Fit surrogate with scripts/surrogate_builder.py
CLI Examples
# Generate 20 LHS samples for 3 parameters
python3 scripts/doe_generator.py --params 3 --budget 20 --method lhs --json
# Rank parameters by sensitivity scores
python3 scripts/sensitivity_summary.py --scores 0.2,0.5,0.3 --names kappa,mobility,W --json
# Get optimizer recommendation for 3D problem with 50 eval budget
python3 scripts/optimizer_selector.py --dim 3 --budget 50 --noise low --json
# Build surrogate model from simulation data
python3 scripts/surrogate_builder.py --x 0,1,2 --y 10,12,15 --model rbf --json
Conversational Workflow Example
User: I need to calibrate thermal conductivity and diffusivity for my FEM simulation. I can run about 30 simulations.
Agent workflow:
- Identify 2 parameters → --params 2
- Budget is 30 → --budget 30
- Use LHS for general exploration: python3 scripts/doe_generator.py --params 2 --budget 30 --method lhs --json
- After user runs simulations and provides outputs, summarize sensitivity: python3 scripts/sensitivity_summary.py --scores 0.7,0.3 --names conductivity,diffusivity --json
- Recommend optimizer: python3 scripts/optimizer_selector.py --dim 2 --budget 30 --noise low --json
Error Handling
| Error | Cause | Resolution |
|---|---|---|
| params must be positive | Zero or negative dimension | Ask user for valid parameter count |
| budget must be positive | Zero or negative budget | Ask user for realistic simulation budget |
| method must be lhs, sobol, or factorial | Invalid method | Use decision guidance to pick a valid method |
| scores must be comma-separated | Malformed input | Reformat as 0.1,0.2,0.3 |
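An agent can reproduce these checks before invoking any script, failing fast with the same messages. This is a hypothetical validator written to match the error table, not code taken from the scripts themselves.

```python
def validate_doe_args(params, budget, method):
    """Pre-flight checks mirroring the error table above."""
    if params <= 0:
        raise ValueError("params must be positive")
    if budget <= 0:
        raise ValueError("budget must be positive")
    if method not in ("lhs", "sobol", "factorial"):
        raise ValueError("method must be lhs, sobol, or factorial")

validate_doe_args(params=3, budget=20, method="lhs")  # passes silently
```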
Limitations
- Not for real-time optimization: Scripts provide recommendations, not live optimization loops
- Surrogate is a placeholder: surrogate_builder.py computes basic metrics; replace with an actual model for production
- No automatic simulation execution: User must run simulations externally and provide results
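Since the built-in surrogate is a placeholder, here is one minimal stand-in for the 1-D case: inverse-distance weighting, which interpolates training points exactly and needs only the standard library. This is an assumption-laden sketch for illustration; a production surrogate (Gaussian process, RBF network, etc.) should replace it.

```python
def idw_surrogate(xs, ys, power=2.0):
    """Inverse-distance-weighting surrogate over 1-D training data.

    Predictions are weighted averages of training outputs, with weights
    proportional to 1 / distance**power; training points are reproduced
    exactly.
    """
    def predict(x):
        weighted_sum, total_weight = 0.0, 0.0
        for xi, yi in zip(xs, ys):
            d = abs(x - xi)
            if d == 0.0:
                return yi  # exact hit: return the training value
            w = d ** -power
            weighted_sum += w * yi
            total_weight += w
        return weighted_sum / total_weight
    return predict

# Same toy data as the surrogate_builder.py CLI example above.
model = idw_surrogate([0.0, 1.0, 2.0], [10.0, 12.0, 15.0])
model(1.0)  # 12.0
```

IDW is a reasonable cheap baseline, but unlike a Gaussian process it provides no uncertainty estimate, which Bayesian optimization needs.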
References
- references/doe_methods.md - Detailed DOE method comparison
- references/optimizer_selection.md - Optimizer algorithm details
- references/sensitivity_guidelines.md - Sensitivity analysis interpretation
- references/surrogate_guidelines.md - Surrogate model selection
Version History
- v1.1.0 (2024-12-24): Enhanced documentation, decision guidance, conversational examples
- v1.0.0: Initial release with core scripts
Source
git clone https://github.com/HeshamFS/materials-simulation-skills
SKILL.md: https://github.com/HeshamFS/materials-simulation-skills/blob/main/skills/simulation-workflow/parameter-optimization/SKILL.md

Overview
This skill provides a repeatable workflow to design experiments, rank parameter influence, and pick optimization strategies for materials simulations. It supports calibration, uncertainty analysis, and efficient parameter sweeps using DOE, LHS, Sobol analysis, surrogates, and Bayesian optimization.
How This Skill Works
Gather inputs such as parameter bounds, evaluation budget, noise level, and constraints. The workflow selects a DOE method and an optimizer, generates samples with the chosen method (e.g., lhs, sobol, factorial), and runs simulations at those points. It then summarizes sensitivity and, if needed, builds a surrogate to guide further optimization.
When to Use It
- Calibrating material properties (e.g., conductivity, diffusivity) against experimental data
- Conducting uncertainty analyses to rank parameter influence using Sobol or sensitivity summaries
- Performing parameter sweeps within defined bounds to explore performance envelopes
- Designing experiments with LHS for general exploration in moderate-dimensional spaces
- Setting up surrogate modeling or Bayesian optimization for expensive simulations
Quick Start
- Step 1: Define parameter bounds and the evaluation budget (e.g., 3 parameters, 30 runs)
- Step 2: Generate DOE samples with the preferred method, e.g., "python3 scripts/doe_generator.py --params 3 --budget 30 --method lhs --json"
- Step 3: Run simulations, then summarize sensitivity and select an optimizer with the provided scripts
Best Practices
- Collect clear parameter bounds with units and an explicit evaluation budget
- Choose DOE method based on dimension and coverage needs (factorial, sobol, lhs)
- Use sensitivity analysis early to rank parameter importance before full optimization
- Align optimization strategy with noise level and budget (Bayesian optimization for noisy or expensive evaluations)
- Validate surrogate models with holdout data and compare against direct simulations
Example Use Cases
- Calibrating thermal conductivity and diffusivity in FEM models against experiments
- Ranking kappa, mobility, and other parameters to identify dominant drivers in a transport model
- Applying 4-parameter LHS sampling to explore diffusion behavior under uncertainty
- Building an RBF or other surrogate to accelerate expensive material simulations
- Using Bayesian optimization to calibrate a multi-physics model within a fixed budget