
QLoRA

(7 skills)

AI agent skills tagged “QLoRA” for Claude Code, Cursor, Windsurf, and more.

quantizing-models-bitsandbytes

Orchestra-Research/AI-Research-SKILLs

4.3k

Quantizes LLMs to 8-bit or 4-bit for 50-75% memory reduction with minimal accuracy loss. Use when GPU memory is limited, when you need to fit larger models, or when you want faster inference. Supports INT8, NF4, and FP4 formats, QLoRA training, and 8-bit optimizers. Works with HuggingFace Transformers.
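As a sketch of what 4-bit NF4 loading looks like through the Transformers + bitsandbytes integration (the model id and exact settings below are illustrative, not taken from the skill itself), together with the arithmetic behind the 50-75% memory claim:

```python
def weight_gb(n_params, bits):
    """Back-of-envelope VRAM needed for model weights alone at a given precision."""
    return n_params * bits / 8 / 1e9

# A 7B model: ~14 GB of weights in fp16 vs ~3.5 GB in 4-bit (the 75% end of the claim);
# 8-bit lands at ~7 GB (the 50% end).
fp16_gb = weight_gb(7e9, 16)
nf4_gb = weight_gb(7e9, 4)

def load_in_nf4(model_id="meta-llama/Llama-2-7b-hf"):
    """Illustrative 4-bit load; requires transformers + bitsandbytes and a GPU."""
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    quant_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",          # NormalFloat4, the QLoRA default
        bnb_4bit_compute_dtype="bfloat16",  # dequantize to bf16 for matmuls
        bnb_4bit_use_double_quant=True,     # also quantize the scaling constants
    )
    return AutoModelForCausalLM.from_pretrained(model_id, quantization_config=quant_config)
```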

axolotl

Orchestra-Research/AI-Research-SKILLs

4.3k

Expert guidance for fine-tuning LLMs with Axolotl - YAML configs, 100+ models, LoRA/QLoRA, DPO/KTO/ORPO/GRPO, multimodal support
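A minimal QLoRA-style Axolotl config might look like the following; the base model, dataset, and hyperparameters are placeholders, so treat this as a sketch of the YAML shape rather than a tested recipe:

```yaml
# Illustrative QLoRA config for `axolotl train config.yml` (all values are placeholders)
base_model: NousResearch/Llama-2-7b-hf
load_in_4bit: true
adapter: qlora

lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true   # adapt all linear layers

datasets:
  - path: mhenrichsen/alpaca_2k_test
    type: alpaca

sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 1
output_dir: ./outputs/qlora-out
```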

unsloth

Orchestra-Research/AI-Research-SKILLs

4.3k

Expert guidance for fast fine-tuning with Unsloth - 2-5x faster training, 50-80% less memory, LoRA/QLoRA optimization
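Unsloth wraps model loading and LoRA attachment in its `FastLanguageModel` class; the sketch below is illustrative (model id and hyperparameters are assumptions) and requires unsloth plus a CUDA GPU to actually run:

```python
# The projection matrices typically adapted in a Llama-style transformer block:
# four attention projections plus three MLP projections.
TARGET_MODULES = ["q_proj", "k_proj", "v_proj", "o_proj",
                  "gate_proj", "up_proj", "down_proj"]

def load_qlora_model():
    """Illustrative Unsloth QLoRA setup; requires unsloth + a CUDA GPU."""
    from unsloth import FastLanguageModel
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/llama-3-8b-bnb-4bit",  # assumed pre-quantized model id
        max_seq_length=2048,
        load_in_4bit=True,
    )
    model = FastLanguageModel.get_peft_model(
        model,
        r=16,
        target_modules=TARGET_MODULES,
        lora_alpha=16,
        use_gradient_checkpointing="unsloth",  # Unsloth's memory-saving variant
    )
    return model, tokenizer
```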

gptq

Orchestra-Research/AI-Research-SKILLs

4.3k

Post-training 4-bit quantization for LLMs with minimal accuracy loss. Use for deploying large models (70B, 405B) on consumer GPUs, when you need 4× memory reduction with <2% perplexity degradation, or for faster inference (3-4× speedup) vs FP16. Integrates with transformers and PEFT for QLoRA fine-tuning.
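Through the Transformers GPTQ integration (optimum plus a GPTQ backend), a one-shot 4-bit quantization run might look like this sketch; the model id and calibration dataset are illustrative:

```python
# The 4x claim at a glance: a 70B model's weights drop from ~140 GB (fp16, 2 bytes
# per parameter) to ~35 GB (4-bit, 0.5 bytes per parameter).
fp16_70b_gb = 70e9 * 2 / 1e9
gptq_70b_gb = 70e9 * 0.5 / 1e9

def quantize_with_gptq(model_id="facebook/opt-125m"):
    """Illustrative GPTQ pass; requires transformers, optimum, and a GPTQ backend."""
    from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)
    # Quantization happens during from_pretrained, calibrated on C4 samples
    return AutoModelForCausalLM.from_pretrained(
        model_id, quantization_config=gptq_config, device_map="auto")
```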

implementing-llms-litgpt

Orchestra-Research/AI-Research-SKILLs

4.3k

Implements and trains LLMs using Lightning AI's LitGPT with 20+ pretrained architectures (Llama, Gemma, Phi, Qwen, Mistral). Use when you need clean model implementations, an educational understanding of architectures, or production fine-tuning with LoRA/QLoRA. Single-file implementations, no abstraction layers.


llama-factory

Orchestra-Research/AI-Research-SKILLs

4.3k

Expert guidance for fine-tuning LLMs with LLaMA-Factory - WebUI no-code, 100+ models, 2/3/4/5/6/8-bit QLoRA, multimodal support
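Beyond the WebUI, LLaMA-Factory runs are driven by YAML passed to `llamafactory-cli train`; the fragment below is a sketch with placeholder values, loosely following the shape of the project's example configs:

```yaml
# Illustrative QLoRA SFT config for `llamafactory-cli train config.yaml` (placeholder values)
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
quantization_bit: 4        # QLoRA; other bit widths need the matching backend

dataset: alpaca_en_demo
template: llama3
cutoff_len: 2048

per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
output_dir: saves/llama3-8b-qlora
```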

peft-fine-tuning

Orchestra-Research/AI-Research-SKILLs

4.3k

Parameter-efficient fine-tuning for LLMs using LoRA, QLoRA, and 25+ methods. Use when fine-tuning large models (7B-70B) with limited GPU memory, when you need to train <1% of parameters with minimal accuracy loss, or for multi-adapter serving. HuggingFace's official library integrated with transformers ecosystem.
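The "<1% of parameters" figure follows from LoRA's shape: a rank-r adapter on a d×d Linear adds only d·r + r·d trainable weights, a fraction 2r/d of the original matrix. A sketch of wiring this up with PEFT (hyperparameters and target modules are illustrative):

```python
def lora_fraction(d=4096, r=16):
    """Trainable fraction for one d x d Linear with a rank-r adapter: 2r/d."""
    return (d * r + r * d) / (d * d)

frac = lora_fraction()  # 2*16/4096 = 0.0078125, i.e. under 1%

def wrap_for_qlora(model):
    """Illustrative QLoRA wrapping of an already-4-bit-loaded model; requires peft."""
    from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
    model = prepare_model_for_kbit_training(model)  # cast norms, enable input grads
    config = LoraConfig(
        r=16,
        lora_alpha=32,
        lora_dropout=0.05,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
        task_type="CAUSAL_LM",
    )
    return get_peft_model(model, config)
```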
