
Post-Training

(7 skills)

AI agent skills tagged “Post-Training” for Claude Code, Cursor, Windsurf, and more.

gptq

Orchestra-Research/AI-Research-SKILLs

4.3k

Post-training 4-bit quantization for LLMs with minimal accuracy loss. Use for deploying large models (70B, 405B) on consumer GPUs, when you need 4× memory reduction with <2% perplexity degradation, or for faster inference (3–4× speedup over FP16). Integrates with transformers and PEFT for QLoRA fine-tuning.
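To make the 4× memory claim concrete, here is a minimal sketch of grouped 4-bit round-to-nearest quantization, the storage arithmetic GPTQ builds on. It is not GPTQ's Hessian-guided error correction; the function names and group size are illustrative.

```python
# Illustrative grouped 4-bit quantization of a weight row. Each group stores
# one float offset, one float scale, and 4-bit integer codes (0..15).
# This is NOT GPTQ's full algorithm, just the underlying 4-bit arithmetic.

def quantize_4bit(weights, group_size=4):
    """Quantize floats to 4-bit codes per group, with an offset and scale."""
    quantized = []
    for i in range(0, len(weights), group_size):
        group = weights[i:i + group_size]
        lo, hi = min(group), max(group)
        scale = (hi - lo) / 15 or 1.0  # 4 bits -> 16 levels; guard all-equal groups
        codes = [round((w - lo) / scale) for w in group]
        quantized.append((lo, scale, codes))
    return quantized

def dequantize(quantized):
    """Reconstruct approximate floats from (offset, scale, codes) groups."""
    return [lo + scale * c for lo, scale, codes in quantized for c in codes]

weights = [0.12, -0.33, 0.05, 0.27, -0.18, 0.41, 0.0, -0.09]
restored = dequantize(quantize_4bit(weights))
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

The worst-case error per weight is half the group's scale, which is why methods like GPTQ quantize in small groups and then further correct the remaining error using second-order (Hessian) information.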

grpo-rl-training

Orchestra-Research/AI-Research-SKILLs

4.3k

Expert guidance on GRPO/RL fine-tuning with TRL for reasoning and task-specific model training.
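GRPO's defining trick is replacing a learned value model with a group-relative baseline: sample several completions per prompt, score them, and normalize rewards within the group. A minimal sketch of that advantage computation (illustration only, not TRL's implementation):

```python
# Sketch of GRPO's group-relative advantage. For G sampled completions of the
# same prompt, each advantage is the reward standardized within the group,
# so no separate value network is needed.
import statistics

def grpo_advantages(group_rewards, eps=1e-8):
    """Advantage of each completion = (reward - group mean) / group std."""
    mean = statistics.mean(group_rewards)
    std = statistics.pstdev(group_rewards)
    return [(r - mean) / (std + eps) for r in group_rewards]

# e.g. correctness scores for 4 sampled answers to one math problem
rewards = [1.0, 0.0, 0.5, 0.5]
advs = grpo_advantages(rewards)
```

Advantages sum to zero within a group: above-average completions are reinforced, below-average ones are suppressed, relative only to their siblings.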

fine-tuning-with-trl

Orchestra-Research/AI-Research-SKILLs

4.3k

Fine-tune LLMs with TRL: SFT for instruction tuning, DPO for preference alignment, PPO/GRPO for reward optimization, and reward model training. Use when you need RLHF, want to align a model with preferences, or train from human feedback. Works with HuggingFace Transformers.
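Of the methods listed, DPO is the most compact to state. A minimal sketch of its loss on one preference pair, assuming you already have summed log-probs of the chosen and rejected responses under the policy and the frozen reference model (just the math, not TRL's API):

```python
# Sketch of the DPO objective on one preference pair:
#   loss = -log sigmoid(beta * ((pi_w - ref_w) - (pi_l - ref_l)))
# where pi_* / ref_* are response log-probs under the policy / reference model.
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for a single (chosen, rejected) pair; lower is better."""
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Policy prefers the chosen response more than the reference does -> loss < log(2).
loss = dpo_loss(-10.0, -14.0, -12.0, -13.0)
```

At zero margin the loss is log(2); it falls as the policy widens its preference for the chosen response relative to the reference, which is the implicit-reward view that makes DPO work without an explicit reward model.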

simpo-training

Orchestra-Research/AI-Research-SKILLs

4.3k

Simple Preference Optimization for LLM alignment: a reference-free alternative to DPO with better performance (+6.4 points on AlpacaEval 2.0). Because no reference model is needed, it is more memory- and compute-efficient than DPO. Use for preference alignment when you want simpler, faster training than DPO/PPO.
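The contrast with DPO is easiest to see in the loss itself: SimPO drops the reference model, uses length-normalized average log-probs as the implicit reward, and adds a target margin. A hedged sketch of the math (parameter values are illustrative, not the paper's tuned settings):

```python
# Sketch of the SimPO objective:
#   loss = -log sigmoid(beta * (logp_w/|y_w| - logp_l/|y_l|) - gamma)
# Length-normalized log-probs replace DPO's reference-model log-ratios,
# and gamma enforces a target reward margin.
import math

def simpo_loss(logp_chosen, len_chosen, logp_rejected, len_rejected,
               beta=2.0, gamma=0.5):
    """SimPO loss for one preference pair; no reference model required."""
    margin = beta * (logp_chosen / len_chosen
                     - logp_rejected / len_rejected) - gamma
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Chosen answer has higher per-token log-prob -> loss below log(2).
loss = simpo_loss(-20.0, 10, -45.0, 15)
```

Length normalization matters because raw summed log-probs penalize long responses; dividing by response length makes the implicit reward match how models actually generate.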

slime-rl-training

Orchestra-Research/AI-Research-SKILLs

4.3k

Provides guidance for LLM post-training with RL using slime, a Megatron+SGLang framework. Use when training GLM models, implementing custom data generation workflows, or when you need tight Megatron-LM integration for RL scaling.

verl-rl-training

Orchestra-Research/AI-Research-SKILLs

4.3k

Provides guidance for training LLMs with reinforcement learning using verl (Volcano Engine RL). Use when implementing RLHF, GRPO, PPO, or other RL algorithms for LLM post-training at scale with flexible infrastructure backends.
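The PPO objective that frameworks like this scale up is small enough to state per token. A sketch of the clipped surrogate, independent of verl itself (illustrative math only):

```python
# Sketch of PPO's clipped surrogate objective for one token:
#   min(ratio * A, clip(ratio, 1-eps, 1+eps) * A), maximized in training.
# The clip caps how far a single update can move the policy.
import math

def ppo_clipped_objective(logp_new, logp_old, advantage, clip_eps=0.2):
    """Per-token PPO surrogate; higher is better for the policy."""
    ratio = math.exp(logp_new - logp_old)
    clipped = max(1.0 - clip_eps, min(1.0 + clip_eps, ratio))
    return min(ratio * advantage, clipped * advantage)

# ratio = e^1 ~ 2.72 gets clipped to 1.2, capping the incentive to over-update.
obj = ppo_clipped_objective(logp_new=-0.5, logp_old=-1.5, advantage=1.0)
```

The engineering challenge these frameworks solve is not this formula but everything around it: generation, reward scoring, and gradient steps across heterogeneous backends.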

openrlhf-training

Orchestra-Research/AI-Research-SKILLs

4.3k

High-performance RLHF framework with Ray+vLLM acceleration. Use for PPO, GRPO, RLOO, or DPO training of large models (7B–70B+). Built on Ray, vLLM, and ZeRO-3; 2× faster than DeepSpeedChat thanks to its distributed architecture and GPU resource sharing.
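Of the algorithms listed, RLOO is distinctive for its leave-one-out baseline: each sample's advantage is its reward minus the mean reward of the other k−1 samples for the same prompt. A sketch of that estimator (illustration only, not OpenRLHF's implementation):

```python
# Sketch of RLOO's leave-one-out advantage estimator. With k samples per
# prompt, each sample is baselined against the average of the other k-1,
# giving an unbiased, value-model-free baseline.

def rloo_advantages(rewards):
    """Advantage of sample i = r_i - mean(rewards excluding r_i)."""
    k = len(rewards)
    total = sum(rewards)
    return [r - (total - r) / (k - 1) for r in rewards]

advs = rloo_advantages([1.0, 0.0, 0.0, 1.0])
```

Like GRPO's group normalization, this removes the need for a learned critic, but the leave-one-out form keeps the baseline independent of the sample being scored, which is what makes the gradient estimate unbiased.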
