Orchestra-Research/AI-Research-SKILLs Skills

(85)

Browse AI agent skills from Orchestra-Research/AI-Research-SKILLs for Claude Code, OpenClaw, Cursor, Windsurf, and more. Install them with a single command to extend what your agents can do.

speculative-decoding

Orchestra-Research/AI-Research-SKILLs

4.3k

Accelerate LLM inference using speculative decoding, Medusa multiple heads, and lookahead decoding techniques. Use when optimizing inference speed (1.5-3.6× speedup), reducing latency for real-time applications, or deploying models with limited compute. Covers draft models, tree-based attention, Jacobi iteration, parallel token generation, and production deployment strategies.
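The draft-and-verify loop at the heart of speculative decoding can be sketched in plain Python. This is a toy greedy sketch, not the skill's actual API: `target` and `draft` here are hypothetical next-token functions standing in for real models, and the "parallel" verification is shown sequentially for clarity.

```python
def speculative_decode(target, draft, prompt, k=4, max_tokens=12):
    """Toy greedy speculative decoding: a cheap draft model proposes k
    tokens, the target model verifies them, and generation takes the
    target's own token at the first mismatch (so output always matches
    what the target alone would produce)."""
    seq = list(prompt)
    while len(seq) - len(prompt) < max_tokens:
        # 1. Draft proposes k tokens autoregressively (cheap).
        proposal, ctx = [], list(seq)
        for _ in range(k):
            t = draft(ctx)
            proposal.append(t)
            ctx.append(t)
        # 2. Target verifies the k positions (one batched pass in practice).
        accepted, ctx = [], list(seq)
        for t in proposal:
            expected = target(ctx)
            if t == expected:
                accepted.append(t)
                ctx.append(t)
            else:
                accepted.append(expected)  # correction token, then stop
                break
        seq.extend(accepted)
    return seq[len(prompt):][:max_tokens]
```

Because rejected proposals are replaced by the target's token, the output is identical to target-only decoding; the speedup comes from accepting several draft tokens per target pass.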

stable-diffusion-image-generation

Orchestra-Research/AI-Research-SKILLs

4.3k

State-of-the-art text-to-image generation with Stable Diffusion models via HuggingFace Diffusers. Use when generating images from text prompts, performing image-to-image translation, inpainting, or building custom diffusion pipelines.

tensorboard

Orchestra-Research/AI-Research-SKILLs

4.3k

Visualize training metrics, debug models with histograms, compare experiments, inspect model graphs, and profile performance with TensorBoard, Google's ML visualization toolkit.

tensorrt-llm

Orchestra-Research/AI-Research-SKILLs

4.3k

Optimizes LLM inference with NVIDIA TensorRT for maximum throughput and lowest latency. Use for production deployment on NVIDIA GPUs (A100/H100), when you need 10-100x faster inference than PyTorch, or for serving models with quantization (FP8/INT4), in-flight batching, and multi-GPU scaling.
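The INT4 weight-only quantization mentioned above can be illustrated with a minimal symmetric per-tensor sketch. This is a conceptual toy, not TensorRT-LLM's implementation, which quantizes per channel or per group and handles calibration:

```python
def quantize_int4(weights):
    """Symmetric per-tensor INT4 quantization: map floats to integers
    in [-7, 7] with a single scale factor."""
    amax = max(abs(w) for w in weights)
    scale = amax / 7.0 if amax else 1.0
    q = [max(-7, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the 4-bit integers."""
    return [v * scale for v in q]
```

Storing 4-bit integers plus one scale cuts weight memory roughly 4x versus FP16, at the cost of bounded rounding error (at most half a quantization step per weight).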

torchforge-rl-training

Orchestra-Research/AI-Research-SKILLs

4.3k

Provides guidance for PyTorch-native agentic RL using torchforge, Meta's library separating infrastructure from algorithms. Use when you want clean RL abstractions, easy algorithm experimentation, or scalable training with Monarch and TorchTitan.

distributed-llm-pretraining-torchtitan

Orchestra-Research/AI-Research-SKILLs

4.3k

Provides PyTorch-native distributed LLM pretraining using torchtitan with 4D parallelism (FSDP2, TP, PP, CP). Use when pretraining Llama 3.1, DeepSeek V3, or custom models at scale from 8 to 512+ GPUs with Float8, torch.compile, and distributed checkpointing.

transformer-lens-interpretability

Orchestra-Research/AI-Research-SKILLs

4.3k

Provides guidance for mechanistic interpretability research using TransformerLens to inspect and manipulate transformer internals via HookPoints and activation caching. Use when reverse-engineering model algorithms, studying attention patterns, or performing activation patching experiments.

fine-tuning-with-trl

Orchestra-Research/AI-Research-SKILLs

4.3k

Fine-tune LLMs using reinforcement learning with TRL - SFT for instruction tuning, DPO for preference alignment, PPO/GRPO for reward optimization, and reward model training. Use when you need RLHF, want to align a model with preferences, or train from human feedback. Works with HuggingFace Transformers.
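The DPO objective the entry refers to is simple enough to write out for a single preference pair. A minimal sketch, assuming per-sequence log-probabilities are already computed (TRL computes these from the policy and a frozen reference model):

```python
import math

def dpo_loss(policy_chosen_lp, policy_rejected_lp,
             ref_chosen_lp, ref_rejected_lp, beta=0.1):
    """Direct Preference Optimization loss for one preference pair:
    -log sigmoid(beta * (policy_margin - reference_margin)),
    where each margin is log p(chosen) - log p(rejected)."""
    margin = ((policy_chosen_lp - policy_rejected_lp)
              - (ref_chosen_lp - ref_rejected_lp))
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

When the policy matches the reference the margin is zero and the loss is log 2; the loss falls as the policy favors the chosen response more strongly than the reference does, which is the alignment pressure DPO applies without an explicit reward model.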

unsloth

Orchestra-Research/AI-Research-SKILLs

4.3k

Expert guidance for fast fine-tuning with Unsloth: 2-5x faster training, 50-80% less memory, and LoRA/QLoRA optimization.
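The LoRA technique behind those memory savings can be sketched in a few lines of plain Python (toy dense math, not Unsloth's fused kernels): the pretrained weight W stays frozen, and only two small low-rank matrices A and B are trained.

```python
def matvec(M, x):
    """Plain matrix-vector product over nested lists."""
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

def lora_forward(W, A, B, x, alpha=16, r=2):
    """LoRA forward pass: y = W @ x + (alpha / r) * B @ (A @ x).
    W (d_out x d_in) is frozen; only A (r x d_in) and B (d_out x r)
    are trained, so trainable parameters scale with r, not d_out*d_in."""
    base = matvec(W, x)
    update = matvec(B, matvec(A, x))
    s = alpha / r
    return [b + s * u for b, u in zip(base, update)]
```

B is initialized to zero, so at the start of training the adapted model is exactly the base model; QLoRA keeps the same structure but stores W in 4-bit precision.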

verl-rl-training

Orchestra-Research/AI-Research-SKILLs

4.3k

Provides guidance for training LLMs with reinforcement learning using verl (Volcano Engine RL). Use when implementing RLHF, GRPO, PPO, or other RL algorithms for LLM post-training at scale with flexible infrastructure backends.

serving-llms-vllm

Orchestra-Research/AI-Research-SKILLs

4.3k

Serves LLMs with high throughput using vLLM's PagedAttention and continuous batching. Use when deploying production LLM APIs, optimizing inference latency/throughput, or serving models with limited GPU memory. Supports OpenAI-compatible endpoints, quantization (GPTQ/AWQ/FP8), and tensor parallelism.
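The core idea of PagedAttention can be shown with a toy block allocator. This is a conceptual sketch, not vLLM's implementation: the KV cache is split into fixed-size blocks, each sequence keeps a block table, and blocks are allocated on demand rather than reserved up front for the maximum sequence length.

```python
class PagedKVCache:
    """Toy paged KV-cache bookkeeping: fixed-size blocks, a per-sequence
    block table, and on-demand allocation, so memory waste is bounded
    by at most one partially filled block per sequence."""
    def __init__(self, num_blocks, block_size):
        self.block_size = block_size
        self.free = list(range(num_blocks))  # indices of free blocks
        self.tables = {}                     # seq_id -> list of block indices
        self.lengths = {}                    # seq_id -> tokens cached so far

    def append_token(self, seq_id):
        table = self.tables.setdefault(seq_id, [])
        n = self.lengths.get(seq_id, 0)
        if n % self.block_size == 0:         # current block full: grab a new one
            if not self.free:
                raise MemoryError("cache full: preempt or swap a sequence")
            table.append(self.free.pop())
        self.lengths[seq_id] = n + 1

    def release(self, seq_id):
        """Return a finished sequence's blocks to the free pool."""
        self.free.extend(self.tables.pop(seq_id, []))
        self.lengths.pop(seq_id, None)
```

Because freed blocks are immediately reusable by other sequences, many requests can share one GPU's cache, which is what makes continuous batching at high throughput possible.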

weights-and-biases

Orchestra-Research/AI-Research-SKILLs

4.3k

Track ML experiments with automatic logging, visualize training in real time, optimize hyperparameters with sweeps, and manage a model registry with W&B, a collaborative MLOps platform.

whisper

Orchestra-Research/AI-Research-SKILLs

4.3k

OpenAI's general-purpose speech recognition model. Supports 99 languages, transcription, translation to English, and language identification. Six model sizes from tiny (39M params) to large (1550M params). Use for speech-to-text, podcast transcription, or multilingual audio processing. Best for robust, multilingual ASR.
