Fast Inference

(4 skills)

AI agent skills tagged “Fast Inference” for Claude Code, Cursor, Windsurf, and more.

sglang

Orchestra-Research/AI-Research-SKILLs

4.3k

Fast structured generation and serving for LLMs with RadixAttention prefix caching. Use for JSON/regex outputs, constrained decoding, agentic workflows with tool calls, or when you need 5× faster inference than vLLM with prefix sharing. Powers 300,000+ GPUs at xAI, AMD, NVIDIA, and LinkedIn.
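The prefix sharing behind RadixAttention can be pictured with a small sketch. This is a conceptual toy, not SGLang's actual implementation or API: a trie over token IDs records which prompt prefixes already have KV-cache entries, so a new request that shares a prefix with an earlier one only needs fresh prefill for the tail.

```python
# Toy sketch of RadixAttention-style prefix caching (conceptual only,
# not the real SGLang data structure): a trie keyed by token IDs marks
# which prefixes have cached KV states.

class RadixCacheNode:
    def __init__(self):
        self.children = {}   # token id -> child node
        self.has_kv = False  # True if KV states for this prefix are cached

class RadixCache:
    def __init__(self):
        self.root = RadixCacheNode()

    def insert(self, tokens):
        """Record that KV states for every prefix of `tokens` are cached."""
        node = self.root
        for t in tokens:
            node = node.children.setdefault(t, RadixCacheNode())
            node.has_kv = True

    def match_prefix(self, tokens):
        """Return the length of the longest cached prefix of `tokens`."""
        node, matched = self.root, 0
        for t in tokens:
            if t in node.children and node.children[t].has_kv:
                node = node.children[t]
                matched += 1
            else:
                break
        return matched

cache = RadixCache()
cache.insert([1, 2, 3, 4])                  # first request prefills the prompt
reused = cache.match_prefix([1, 2, 3, 9])   # second request shares [1, 2, 3]
print(reused)  # -> 3: only token 9 needs fresh prefill
```

In the real server the trie nodes hold GPU KV-cache blocks and are evicted LRU-style; the payoff is largest for agentic workloads where many requests share long system prompts or tool-call histories.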

awq-quantization

Orchestra-Research/AI-Research-SKILLs

4.3k

Activation-aware weight quantization for 4-bit LLM compression with 3× speedup and minimal accuracy loss. Use when deploying large models (7B-70B) on limited GPU memory, when you need faster inference than GPTQ with better accuracy preservation, or for instruction-tuned and multimodal models. MLSys 2024 Best Paper Award winner.
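The "activation-aware" part can be illustrated with a small numeric toy, which is not the AutoAWQ library: channels whose activations are large ("salient" channels) are scaled up before round-to-nearest 4-bit quantization, so their weights lose less precision, and the inverse scale is folded into the activations. The `alpha` exponent and the error metric here are illustrative choices, not the paper's exact search procedure.

```python
# Toy illustration of the AWQ idea (not the AutoAWQ implementation):
# scale salient channels up before 4-bit rounding, undo the scale after.

def rtn_quantize(values, bits=4):
    """Round-to-nearest symmetric quantization of one weight group."""
    qmax = 2 ** (bits - 1) - 1          # 7 levels each side for 4-bit
    m = max(abs(v) for v in values)
    step = m / qmax if m else 1.0
    return [round(v / step) * step for v in values]

def awq_dequant_error(weights, act_mags, alpha=0.5):
    """Activation-weighted reconstruction error after scaled quantization.

    s_i = act_mag_i ** alpha is the per-channel scale (alpha=0.0 disables
    scaling, i.e. plain round-to-nearest).
    """
    scales = [max(a, 1e-5) ** alpha for a in act_mags]
    scaled = [w * s for w, s in zip(weights, scales)]
    deq = [q / s for q, s in zip(rtn_quantize(scaled), scales)]
    # Output error is what matters, so weight each channel's weight error
    # by its typical activation magnitude.
    return sum(abs(w - d) * a for w, d, a in zip(weights, deq, act_mags))

w = [0.1, -0.2, 0.05, 0.3]      # one quantization group
acts = [10.0, 0.1, 0.1, 0.1]    # channel 0 sees much larger activations
print(awq_dequant_error(w, acts, alpha=0.5)
      < awq_dequant_error(w, acts, alpha=0.0))  # -> True
```

The tradeoff is visible in the sketch: scaling protects the salient channel at the cost of slightly coarser quantization for the others, which is a good deal because output error is dominated by the high-activation channels.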

gptq

Orchestra-Research/AI-Research-SKILLs

4.3k

Post-training 4-bit quantization for LLMs with minimal accuracy loss. Use for deploying large models (70B, 405B) on consumer GPUs, when you need 4× memory reduction with <2% perplexity degradation, or for faster inference (3-4× speedup) vs FP16. Integrates with transformers and PEFT for QLoRA fine-tuning.
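GPTQ's key move is error compensation: after rounding one weight, the rounding error is pushed onto not-yet-quantized weights (using the inverse Hessian of calibration activations) so the layer's output is preserved. The sketch below is a deliberately simplified 1-D analogue, not the real algorithm: under the assumption that all inputs to a row are identical, the output depends only on the sum of the weights, and carrying each rounding error into the next weight keeps that sum within one grid step.

```python
# Toy 1-D analogue of GPTQ's error compensation (NOT the real algorithm,
# which propagates errors via the inverse Hessian of calibration data).
# Assumption: perfectly correlated inputs, so only sum(weights) matters.

def quantize_step(w, step):
    """Round one weight onto the 4-bit grid {-8*step, ..., 7*step}."""
    return max(-8, min(7, round(w / step))) * step

def gptq_like(weights, step=0.05):
    out, carry = [], 0.0
    for w in weights:
        q = quantize_step(w + carry, step)  # quantize with error feedback
        carry = (w + carry) - q             # push error onto the next weight
        out.append(q)
    return out

w = [0.12, -0.07, 0.31, 0.04]
q = gptq_like(w)
# sum(q) stays within half a grid step of sum(w): the carried error is
# bounded, so the (toy) layer output is approximately preserved.
```

Real GPTQ does this column by column over a whole weight matrix with lazy batched updates, which is what makes one-shot quantization of 70B+ models tractable on a single GPU.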

speculative-decoding

Orchestra-Research/AI-Research-SKILLs

4.3k

Accelerate LLM inference using speculative decoding, Medusa multiple heads, and lookahead decoding techniques. Use when optimizing inference speed (1.5-3.6× speedup), reducing latency for real-time applications, or deploying models with limited compute. Covers draft models, tree-based attention, Jacobi iteration, parallel token generation, and production deployment strategies.
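The draft-then-verify loop at the core of speculative decoding fits in a few lines. This is a simplified greedy sketch, not any library's API: `target` and `draft` are stand-in deterministic functions, verification is exact prefix matching rather than the rejection-sampling rule used with stochastic sampling, and the target is called position by position here where a real implementation scores all draft tokens in one parallel forward pass.

```python
# Minimal greedy speculative decoding sketch (stand-in "models", exact-match
# acceptance instead of rejection sampling).

def speculative_decode(target, draft, prompt, k=4, max_new=8):
    seq = list(prompt)
    while len(seq) - len(prompt) < max_new:
        # 1. Cheap draft model proposes k tokens autoregressively.
        proposal = []
        for _ in range(k):
            proposal.append(draft(seq + proposal))
        # 2. Target verifies the proposals (one parallel pass in practice;
        #    scored position by position in this sketch).
        accepted = []
        for i in range(k):
            t = target(seq + proposal[:i])
            if t == proposal[i]:
                accepted.append(t)   # draft token verified, keep going
            else:
                accepted.append(t)   # substitute target's token and stop
                break
        seq += accepted
    return seq[:len(prompt) + max_new]

# Stand-ins: target counts by 1; draft agrees except after multiples of 3.
target = lambda s: s[-1] + 1
draft = lambda s: s[-1] + 1 if s[-1] % 3 else s[-1] + 2
print(speculative_decode(target, draft, [0]))  # identical to greedy target-only decoding
```

The output is guaranteed to match what the target model alone would produce; the speedup comes from the target verifying several draft tokens per forward pass instead of emitting one token at a time. Medusa and lookahead decoding replace the separate draft model with extra heads or Jacobi-style parallel guesses, but keep this same verify-and-accept structure.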
