
slurm-job-script-generator

npx machina-cli add skill HeshamFS/materials-simulation-skills/slurm-job-script-generator --openclaw

SLURM Job Script Generator

Goal

Generate a correct, copy-pasteable SLURM job script (.sbatch) for running a simulation, and surface common configuration mistakes (bad walltime format, conflicting memory flags, oversubscription hints).

Requirements

  • Python 3.8+
  • No external dependencies (Python standard library only)
  • Works on Linux, macOS, and Windows (script generation only)

Inputs to Gather

| Input | Description | Example |
|---|---|---|
| Job name | Short identifier for the job | `phasefield-strong-scaling` |
| Walltime | SLURM time limit | `00:30:00` |
| Partition | Cluster partition/queue (if required) | `compute` |
| Account | Project/account (if required) | `matsim` |
| Nodes | Number of nodes to allocate | `2` |
| MPI tasks | Total tasks, or tasks per node | `128`, or `64` per node |
| Threads | CPUs per task (OpenMP threads) | `2` |
| Memory | `--mem` or `--mem-per-cpu` (cluster policy dependent) | `32G` |
| GPUs | GPUs per node (optional) | `4` |
| Working directory | Where the run should execute | `$SLURM_SUBMIT_DIR` |
| Modules | Environment modules to load (optional) | `gcc/12`, `openmpi/4.1` |
| Run command | The command to launch under SLURM | `./simulate --config cfg.json` |
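As a rough sketch of how these inputs become `#SBATCH` directives (a hypothetical helper for illustration; the real `slurm_script_generator.py` may structure this differently):

```python
# Sketch: map gathered inputs onto #SBATCH directive lines.
# This helper is illustrative only, not the skill's actual implementation.
def sbatch_directives(job_name, time, nodes, ntasks_per_node, cpus_per_task,
                      mem=None, partition=None, account=None, gpus_per_node=None):
    d = [
        f"#SBATCH --job-name={job_name}",
        f"#SBATCH --time={time}",
        f"#SBATCH --nodes={nodes}",
        f"#SBATCH --ntasks-per-node={ntasks_per_node}",
        f"#SBATCH --cpus-per-task={cpus_per_task}",
    ]
    if partition:
        d.append(f"#SBATCH --partition={partition}")
    if account:
        d.append(f"#SBATCH --account={account}")
    if mem:
        d.append(f"#SBATCH --mem={mem}")
    if gpus_per_node:
        d.append(f"#SBATCH --gres=gpu:{gpus_per_node}")
    return d

print("\n".join(sbatch_directives("phasefield-strong-scaling", "00:30:00",
                                  2, 64, 2, mem="32G", partition="compute")))
```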

Decision Guidance

MPI vs MPI+OpenMP layout

Does the code use OpenMP / threading?
├── NO  → Use MPI-only: cpus-per-task=1
└── YES → Use hybrid: set cpus-per-task = threads per MPI rank
          and export OMP_NUM_THREADS = cpus-per-task

Rule of thumb: if you see diminishing strong-scaling efficiency at high MPI ranks, try fewer ranks with more threads per rank (and measure).
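The layout arithmetic above can be sketched as follows (illustrative only; the generator's `results.derived` fields may compute more than this):

```python
# Sketch: derive the hybrid MPI+OpenMP layout from the user's request.
def layout(nodes, ntasks_per_node, threads_per_rank):
    total_ranks = nodes * ntasks_per_node
    cpus_per_task = threads_per_rank  # hybrid: one CPU per OpenMP thread
    env = {"OMP_NUM_THREADS": str(threads_per_rank)}
    return total_ranks, cpus_per_task, env

ranks, cpt, env = layout(nodes=2, ntasks_per_node=64, threads_per_rank=2)
print(ranks, cpt, env["OMP_NUM_THREADS"])  # 128 2 2
```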

Memory flag selection

  • Use either --mem (per node) or --mem-per-cpu (per CPU), not both.
  • Follow your cluster’s documentation; some sites enforce one style.
  • SLURM --mem units are integer MB by default, or an integer with suffix K/M/G/T (and --mem=0 commonly means “all memory on node”).
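A minimal sketch of the either/or check (illustrative; the skill's actual validation code may differ):

```python
import re

# Sketch: enforce that exactly one memory style is used, and that the
# value is an integer with an optional K/M/G/T suffix.
def check_mem(mem=None, mem_per_cpu=None):
    if mem is not None and mem_per_cpu is not None:
        raise ValueError("Provide either --mem or --mem-per-cpu, not both")
    value = mem if mem is not None else mem_per_cpu
    if value is not None and not re.fullmatch(r"\d+[KMGT]?", value):
        raise ValueError(f"bad memory value: {value!r}")
    return value
```

For example, `check_mem(mem="32G")` returns `"32G"`, while passing both flags raises the conflict error. Note that `"0"` passes the format check, matching SLURM's "all memory on node" convention.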

Script Outputs (JSON Fields)

| Script | Key Outputs |
|---|---|
| `scripts/slurm_script_generator.py` | `results.script`, `results.directives`, `results.derived`, `results.warnings` |

Workflow

  1. Gather cluster constraints (partition/account, GPU policy, memory policy).
  2. Choose a process layout (MPI-only vs hybrid MPI+OpenMP).
  3. Generate the script with slurm_script_generator.py.
  4. Inspect warnings (conflicts, suspicious layouts).
  5. Save the generated script as job.sbatch.
  6. Submit with sbatch job.sbatch and monitor with squeue.
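Applying this workflow to the hybrid example produces a script along these lines (an illustrative sketch; the generator's exact formatting and directive set may differ by site):

```bash
#!/bin/bash
#SBATCH --job-name=phasefield
#SBATCH --time=00:30:00
#SBATCH --partition=compute
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=64
#SBATCH --cpus-per-task=2
#SBATCH --mem=32G

module load gcc/12 openmpi/4.1

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

cd "${SLURM_SUBMIT_DIR}"
srun ./simulate --config cfg.json
```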

CLI Examples

# Preview a job script (prints to stdout)
python3 skills/hpc-deployment/slurm-job-script-generator/scripts/slurm_script_generator.py \
  --job-name phasefield \
  --time 00:10:00 \
  --partition compute \
  --nodes 1 \
  --ntasks-per-node 8 \
  --cpus-per-task 2 \
  --mem 16G \
  --module gcc/12 \
  --module openmpi/4.1 \
  -- \
  ./simulate --config config.json

# Write to a file and also emit structured JSON
python3 skills/hpc-deployment/slurm-job-script-generator/scripts/slurm_script_generator.py \
  --job-name phasefield \
  --time 00:10:00 \
  --nodes 1 \
  --ntasks 16 \
  --cpus-per-task 1 \
  --out job.sbatch \
  --json \
  -- \
  /bin/echo hello

Conversational Workflow Example

User: I need an sbatch script for my MPI simulation. I want 2 nodes, 64 ranks per node, 2 OpenMP threads per rank, and 2 hours.

Agent workflow:

  1. Confirm partition/account and whether GPUs are needed.
  2. Generate a hybrid job script:
    python3 scripts/slurm_script_generator.py --job-name run --time 02:00:00 --nodes 2 --ntasks-per-node 64 --cpus-per-task 2 -- ./simulate
  3. Explain the mapping:
    • Total ranks = 128
    • Threads per rank = 2 (OMP_NUM_THREADS=2)
  4. If the user provides node core counts, sanity-check oversubscription using --cores-per-node.
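The oversubscription sanity check in step 4 amounts to comparing the per-node CPU request against the physical core count (a sketch; the skill's warning text will differ):

```python
# Sketch: flag layouts that request more CPUs per node than exist.
def check_oversubscription(ntasks_per_node, cpus_per_task, cores_per_node):
    requested = ntasks_per_node * cpus_per_task
    if requested > cores_per_node:
        return (f"warning: {requested} CPUs requested per node "
                f"exceeds {cores_per_node} physical cores")
    return None

print(check_oversubscription(64, 2, 128))  # None: 128 CPUs fit a 128-core node
print(check_oversubscription(64, 4, 128))  # warning: 256 exceeds 128
```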

Error Handling

| Error | Cause | Resolution |
|---|---|---|
| `time must be HH:MM:SS or D-HH:MM:SS` | Bad walltime format | Use `00:30:00` or `1-00:00:00` |
| `nodes must be positive` | Non-positive nodes | Provide `--nodes >= 1` |
| `Provide either --mem or --mem-per-cpu, not both` | Conflicting memory directives | Choose one memory style |
| `Provide a run command after --` | Missing launch command | Add `-- ./simulate ...` |
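The walltime check, for instance, can be sketched with a single pattern (illustrative; not the skill's exact code):

```python
import re

# Sketch: accept HH:MM:SS or D-HH:MM:SS walltime formats only.
TIME_RE = re.compile(r"(?:\d+-)?\d{2}:\d{2}:\d{2}$")

def validate_time(t):
    if not TIME_RE.match(t):
        raise ValueError("time must be HH:MM:SS or D-HH:MM:SS")
    return t
```

So `validate_time("00:30:00")` and `validate_time("1-00:00:00")` pass, while `validate_time("2h")` raises.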

Limitations

  • Does not query cluster hardware or site policies; it can only validate internal consistency.
  • SLURM installations vary (GPU directives, QoS rules, partitions). Adjust directives for your site.

References

  • references/slurm_directives.md - Common #SBATCH directives and mapping tips

Version History

  • v1.0.0 (2026-02-25): Initial SLURM job script generator

Source

git clone https://github.com/HeshamFS/materials-simulation-skills.git
# SKILL.md: skills/hpc-deployment/slurm-job-script-generator/SKILL.md

Overview

SLURM Job Script Generator creates correct, copy-pasteable sbatch scripts for simulations and surfaces common configuration mistakes like bad walltime formats, memory flag conflicts, and potential oversubscription. It guides you through gathering inputs (job name, walltime, partition, account, nodes, tasks, cpus per task, memory, GPUs, modules, and run command), decides between MPI-only or MPI+OpenMP layouts, and standardizes #SBATCH directives.

How This Skill Works

It runs on Python 3.8+ with the standard library to collect required inputs, validate resource requests, and generate an sbatch script via the slurm_script_generator.py tool. It then outputs structured JSON fields (results.script, results.directives, results.derived, results.warnings) to support debugging, auditing, and easy integration.
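A pipeline consuming the `--json` output might look like this (the `results.*` keys follow the documented fields; the surrounding payload structure is an assumption):

```python
import json

# Sketch: gate submission on the generator's structured warnings.
payload = json.loads("""
{"results": {"script": "#!/bin/bash\\n...",
             "directives": ["#SBATCH --nodes=1"],
             "derived": {"total_tasks": 16},
             "warnings": []}}
""")
results = payload["results"]
if results["warnings"]:
    raise SystemExit("fix warnings before submitting")
print(results["derived"]["total_tasks"])  # 16
```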

When to Use It

  • Preparing a submission script for a new HPC simulation run.
  • Deciding between MPI-only vs hybrid MPI+OpenMP layouts based on code threading behavior.
  • Standardizing #SBATCH directives across multiple runs or projects.
  • Debugging sbatch/srun configurations to catch issues like walltime formats or memory flag conflicts.
  • Previewing a ready-to-run script, or exporting it as structured JSON for automation.

Quick Start

  1. Run the generator with your inputs (job name, time, nodes, ntasks, cpus per task, mem, etc.).
  2. Review directives and warnings; adjust resource requests or layout if needed.
  3. Save as job.sbatch and submit with sbatch job.sbatch (or use --json for automated pipelines).

Best Practices

  • Gather cluster constraints (partition, account, GPU policy, memory policy) before scripting.
  • Choose MPI-only or hybrid MPI+OpenMP early, and reflect the decision in cpus-per-task and OMP_NUM_THREADS.
  • Use either --mem or --mem-per-cpu, not both; follow your cluster policy to avoid submission errors.
  • Review generated warnings for conflicts, suspicious layouts, or inconsistent resource requests.
  • Save the final script as job.sbatch and test with sbatch in a safe or preview mode before full runs.

Example Use Cases

  • MPI-only run: 2 nodes, 128 total ranks, 1 CPU per task, 32G memory, walltime 00:30:00.
  • Hybrid MPI+OpenMP: 4 nodes, 64 ranks per node, 2 OpenMP threads per rank, 64G memory, walltime 01:00:00.
  • Memory flag conflict detected: user provides both --mem and --mem-per-cpu and the generator flags an error.
  • GPU-enabled run: 2 nodes, 4 GPUs per node, cpus-per-task 4, mem 32G, walltime 02:00:00.
  • Preview export: using --json to emit script and directives without writing a file, for CI workflow.
