npx machina-cli add skill Orchestra-Research/AI-Research-SKILLs/simpo --openclaw

SimPO - Simple Preference Optimization

Quick start

SimPO is a reference-free preference optimization method: it trains directly on chosen/rejected pairs, requires no reference model, and is reported to outperform DPO.
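As a reference point (a summary of the objective from the SimPO paper, not necessarily verbatim from scripts/run_simpo.py), SimPO scores each response by its length-normalized log-likelihood and enforces a target reward margin $\gamma$; the config's gamma_beta_ratio corresponds to $\gamma/\beta$:

$$\mathcal{L}_{\mathrm{SimPO}}(\theta) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\Big[\log \sigma\Big(\tfrac{\beta}{|y_w|}\log \pi_\theta(y_w \mid x) - \tfrac{\beta}{|y_l|}\log \pi_\theta(y_l \mid x) - \gamma\Big)\Big]$$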

Installation:

# Create environment
conda create -n simpo python=3.10 && conda activate simpo

# Install PyTorch 2.2.2
# Visit: https://pytorch.org/get-started/locally/

# Install alignment-handbook
git clone https://github.com/huggingface/alignment-handbook.git
cd alignment-handbook
python -m pip install .

# Install Flash Attention 2
python -m pip install flash-attn --no-build-isolation
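Optional sanity check before launching (a hypothetical check_env.py; the alignment-handbook installs its code as the alignment package):

# check_env.py -- verify the key dependencies import cleanly
import torch
import flash_attn           # installed via flash-attn
import alignment            # package provided by alignment-handbook (assumed import name)

print(torch.__version__, torch.cuda.is_available())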

Training (Mistral 7B):

ACCELERATE_LOG_LEVEL=info accelerate launch \
  --config_file accelerate_configs/deepspeed_zero3.yaml \
  scripts/run_simpo.py \
  training_configs/mistral-7b-base-simpo.yaml

Common workflows

Workflow 1: Train from base model (Mistral 7B)

Config (mistral-7b-base-simpo.yaml):

# Model
model_name_or_path: mistralai/Mistral-7B-v0.1
torch_dtype: bfloat16

# Dataset
dataset_mixer:
  HuggingFaceH4/ultrafeedback_binarized: 1.0
dataset_splits:
  - train_prefs
  - test_prefs

# SimPO hyperparameters
beta: 2.0                  # Reward scaling (2.0-10.0)
gamma_beta_ratio: 0.5       # Target reward margin as a fraction of beta (0-1)
loss_type: sigmoid          # sigmoid or hinge
sft_weight: 0.0             # Optional SFT regularization

# Training
learning_rate: 5e-7         # Critical: 3e-7 to 1e-6
num_train_epochs: 1
per_device_train_batch_size: 1
gradient_accumulation_steps: 8

# Output
output_dir: ./outputs/mistral-7b-simpo

Launch training:

accelerate launch --config_file accelerate_configs/deepspeed_zero3.yaml \
  scripts/run_simpo.py training_configs/mistral-7b-base-simpo.yaml
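Once training finishes, a quick smoke test of the saved checkpoint might look like this (a sketch; assumes transformers and accelerate are available and uses output_dir from the config above):

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

ckpt = "./outputs/mistral-7b-simpo"  # output_dir from the config above
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForCausalLM.from_pretrained(ckpt, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "Explain preference optimization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))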

Workflow 2: Fine-tune instruct model (Llama 3 8B)

Config (llama3-8b-instruct-simpo.yaml):

model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct

dataset_mixer:
  argilla/ultrafeedback-binarized-preferences-cleaned: 1.0

beta: 2.5
gamma_beta_ratio: 0.5
learning_rate: 5e-7
sft_weight: 0.1             # Add SFT loss to preserve capabilities

num_train_epochs: 1
per_device_train_batch_size: 2
gradient_accumulation_steps: 4
output_dir: ./outputs/llama3-8b-simpo

Launch:

accelerate launch --config_file accelerate_configs/deepspeed_zero3.yaml \
  scripts/run_simpo.py training_configs/llama3-8b-instruct-simpo.yaml
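When sft_weight > 0, a reasonable reading of the objective (matching the "Add SFT loss" comment in the config above, though not necessarily the repository's exact formulation) is a weighted sum of the preference loss and a cross-entropy term on the chosen response:

$$\mathcal{L} = \mathcal{L}_{\mathrm{SimPO}} + \texttt{sft\_weight}\cdot\mathcal{L}_{\mathrm{SFT}}(y_w)$$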

Workflow 3: Reasoning-intensive tasks (lower LR)

For math/code tasks:

model_name_or_path: deepseek-ai/deepseek-math-7b-base

dataset_mixer:
  argilla/distilabel-math-preference-dpo: 1.0

beta: 5.0                   # Higher for stronger signal
gamma_beta_ratio: 0.7       # Larger margin
learning_rate: 3e-7         # Lower LR for reasoning
sft_weight: 0.0

num_train_epochs: 1
per_device_train_batch_size: 1
gradient_accumulation_steps: 16

When to use vs alternatives

Use SimPO when:

  • Want simpler training than DPO (no reference model)
  • Have preference data (chosen/rejected pairs)
  • Need better performance than DPO
  • Limited compute resources
  • Single-node training sufficient

Algorithm selection:

  • SimPO: Simplest, best performance, no reference model
  • DPO: Need reference model baseline, more conservative
  • PPO: Maximum control, need reward model, complex setup
  • GRPO: Memory-efficient RL, no critic

Use alternatives instead:

  • OpenRLHF: Multi-node distributed training, PPO/GRPO
  • TRL: Need multiple methods in one framework
  • DPO: Established baseline comparison

Common issues

Issue: Loss divergence

Reduce learning rate:

learning_rate: 3e-7  # Reduce from 5e-7

Reduce beta:

beta: 1.0  # Reduce from 2.0

Issue: Model forgets capabilities

Add SFT regularization:

sft_weight: 0.1  # Add SFT loss component

Issue: Poor preference separation

Increase beta and margin:

beta: 5.0            # Increase from 2.0
gamma_beta_ratio: 0.8  # Increase from 0.5

Issue: OOM during training

Reduce batch size:

per_device_train_batch_size: 1
gradient_accumulation_steps: 16  # Maintain effective batch
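The effective batch size is per_device_train_batch_size × gradient_accumulation_steps × number of GPUs, so trading per-device batch for accumulation keeps it constant (num_gpus = 8 here is an assumption for illustration):

num_gpus = 8                 # assumption: one node with 8 GPUs
print(2 * 8 * num_gpus)      # before: per_device=2, grad_accum=8  -> 128
print(1 * 16 * num_gpus)     # after:  per_device=1, grad_accum=16 -> 128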

Enable gradient checkpointing:

gradient_checkpointing: true

Advanced topics

Loss functions: See references/loss-functions.md for sigmoid vs hinge loss, mathematical formulations, and when to use each.

Hyperparameter tuning: See references/hyperparameters.md for beta, gamma, learning rate selection guide, and model-size-specific recommendations.

Dataset preparation: See references/datasets.md for preference data formats, quality filtering, and custom dataset creation.
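For a quick look at the expected chosen/rejected format, the UltraFeedback dataset from Workflow 1 can be inspected directly (a sketch assuming the Hugging Face datasets library; field names are those of HuggingFaceH4/ultrafeedback_binarized):

from datasets import load_dataset

ds = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")
example = ds[0]
print(example["prompt"])        # shared prompt
print(example["chosen"][-1])    # preferred completion (last turn of the chat)
print(example["rejected"][-1])  # dispreferred completion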

Hardware requirements

  • GPU: NVIDIA A100/H100 recommended
  • VRAM:
    • 7B model: 1× A100 40GB (DeepSpeed ZeRO-3)
    • 8B model: 2× A100 40GB
    • 70B model: 8× A100 80GB
  • Single-node: DeepSpeed ZeRO-3 sufficient
  • Mixed precision: BF16 recommended

Memory optimization:

  • DeepSpeed ZeRO-3 (default config)
  • Gradient checkpointing
  • Flash Attention 2

Resources

Source

git clone https://github.com/Orchestra-Research/AI-Research-SKILLs.git
# Skill file: 06-post-training/simpo/SKILL.md

Overview

SimPO is a reference-free method for aligning LLMs using preference data. It claims better performance than DPO (+6.4 points on AlpacaEval 2.0) and requires no reference model, making training simpler and more efficient than DPO/PPO.

How This Skill Works

SimPO scores each response by its length-normalized (average) log-likelihood under the policy, treating it as an implicit reward, and optimizes a margin-based preference objective without a reference model. It exposes configurable hyperparameters (beta for reward scaling, gamma_beta_ratio for the target margin, and loss_type of sigmoid or hinge) plus optional SFT regularization, with training run via Accelerate and DeepSpeed ZeRO-3 for efficiency.
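A minimal sketch of the corresponding loss, assuming the length-normalized log-probabilities are already computed (not the repository's exact implementation):

import torch
import torch.nn.functional as F

def simpo_loss(chosen_logps, rejected_logps, beta=2.0, gamma_beta_ratio=0.5, loss_type="sigmoid"):
    # chosen_logps / rejected_logps: average per-token log-probabilities of the
    # chosen and rejected responses under the current policy, shape (batch,)
    gamma = gamma_beta_ratio * beta                          # target reward margin
    logits = beta * (chosen_logps - rejected_logps) - gamma
    if loss_type == "sigmoid":
        return -F.logsigmoid(logits).mean()
    if loss_type == "hinge":
        return torch.relu(1.0 - logits).mean()
    raise ValueError(f"unknown loss_type: {loss_type}")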

When to Use It

  • You want simpler, reference-free alignment training
  • You have preference data (chosen/rejected pairs)
  • You need better performance than DPO
  • You have limited compute resources
  • You can train on a single node

Quick Start

  1. Set up the environment and dependencies (conda env, PyTorch, alignment-handbook, flash-attn)
  2. Choose a base model and create a config (e.g., mistral-7b-base-simpo.yaml) with dataset_mixer, dataset_splits, and SimPO hyperparameters
  3. Run training: accelerate launch --config_file accelerate_configs/deepspeed_zero3.yaml scripts/run_simpo.py training_configs/mistral-7b-base-simpo.yaml

Best Practices

  • Start with a small learning rate (e.g., 5e-7) and 1 training epoch
  • Tune beta and gamma_beta_ratio to balance signal and margin
  • Choose loss_type (sigmoid or hinge) and adjust sft_weight to preserve capabilities
  • Use train_prefs/test_prefs with a clean dataset_mixer (e.g., Ultrafeedback)
  • Prefer single-node training with modest batch sizes (per_device_train_batch_size, gradient_accumulation_steps)

Example Use Cases

  • Workflow 1: Train Mistral-7B base with ultrafeedback preferences (mistral-7b-base-simpo.yaml)
  • Workflow 2: Fine-tune Llama-3-8B-Instruct with ultrafeedback-cleaned preferences (llama3-8b-instruct-simpo.yaml)
  • Workflow 3: Reasoning tasks on math/code (deepseek-math-7b-base with the argilla/distilabel-math-preference-dpo dataset and beta=5.0)
  • Workflow 2 highlight: include sft_weight (0.1) to preserve capabilities during Llama-3-8B-Instruct SimPO training
  • General use: Switch from DPO/PPO to SimPO for simpler, faster alignment on single-node setups

Related Skills

gptq

Orchestra-Research/AI-Research-SKILLs

Post-training 4-bit quantization for LLMs with minimal accuracy loss. Use for deploying large models (70B, 405B) on consumer GPUs, when you need 4× memory reduction with <2% perplexity degradation, or for faster inference (3-4× speedup) vs FP16. Integrates with transformers and PEFT for QLoRA fine-tuning.

grpo-rl-training

Orchestra-Research/AI-Research-SKILLs

Expert guidance for GRPO/RL fine-tuning with TRL for reasoning and task-specific model training

fine-tuning-with-trl

Orchestra-Research/AI-Research-SKILLs

Fine-tune LLMs using reinforcement learning with TRL - SFT for instruction tuning, DPO for preference alignment, PPO/GRPO for reward optimization, and reward model training. Use when you need RLHF, want to align a model with preferences, or train from human feedback. Works with HuggingFace Transformers.

slime-rl-training

Orchestra-Research/AI-Research-SKILLs

Provides guidance for LLM post-training with RL using slime, a Megatron+SGLang framework. Use when training GLM models, implementing custom data generation workflows, or needing tight Megatron-LM integration for RL scaling.

verl-rl-training

Orchestra-Research/AI-Research-SKILLs

Provides guidance for training LLMs with reinforcement learning using verl (Volcano Engine RL). Use when implementing RLHF, GRPO, PPO, or other RL algorithms for LLM post-training at scale with flexible infrastructure backends.

openrlhf-training

Orchestra-Research/AI-Research-SKILLs

High-performance RLHF framework with Ray+vLLM acceleration. Use for PPO, GRPO, RLOO, DPO training of large models (7B-70B+). Built on Ray, vLLM, ZeRO-3. 2× faster than DeepSpeedChat with distributed architecture and GPU resource sharing.
