tensorrt-llm
npx machina-cli add skill Orchestra-Research/AI-Research-SKILLs/tensorrt-llm --openclaw
TensorRT-LLM
NVIDIA's open-source library for optimizing LLM inference with state-of-the-art performance on NVIDIA GPUs.
When to use TensorRT-LLM
Use TensorRT-LLM when:
- Deploying on NVIDIA GPUs (A100, H100, GB200)
- Need maximum throughput (24,000+ tokens/sec on Llama 3)
- Require low latency for real-time applications
- Working with quantized models (FP8, INT4, FP4)
- Scaling across multiple GPUs or nodes
Use vLLM instead when:
- Need simpler setup and Python-first API
- Want PagedAttention without TensorRT compilation
- Working with AMD GPUs or non-NVIDIA hardware
Use llama.cpp instead when:
- Deploying on CPU or Apple Silicon
- Need edge deployment without NVIDIA GPUs
- Want simpler GGUF quantization format
Quick start
Installation
# Docker (recommended)
docker pull nvidia/tensorrt_llm:latest
# pip install
pip install tensorrt_llm==1.2.0rc3
# Requires CUDA 13.0.0, TensorRT 10.13.2, Python 3.10-3.12
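A quick sanity check after either install path is to import the package and print its version (a minimal check; it assumes the wheel installed cleanly against a compatible CUDA/TensorRT runtime):
# Verify the installation by importing the package and printing its version
import tensorrt_llm
print(tensorrt_llm.__version__)  # should report the installed version, e.g. 1.2.0rc3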
Basic inference
from tensorrt_llm import LLM, SamplingParams

# Initialize model
llm = LLM(model="meta-llama/Meta-Llama-3-8B")

# Configure sampling
sampling_params = SamplingParams(
    max_tokens=100,
    temperature=0.7,
    top_p=0.9
)

# Generate
prompts = ["Explain quantum computing"]
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text)  # each request returns one or more completions
Serving with trtllm-serve
# Start server (automatic model download and compilation)
# --tp_size 4 enables tensor parallelism across 4 GPUs
trtllm-serve meta-llama/Meta-Llama-3-8B \
    --tp_size 4 \
    --max_batch_size 256 \
    --max_num_tokens 4096

# Client request
curl -X POST http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "meta-llama/Meta-Llama-3-8B",
        "messages": [{"role": "user", "content": "Hello!"}],
        "temperature": 0.7,
        "max_tokens": 100
    }'
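Because trtllm-serve exposes an OpenAI-compatible API, any OpenAI client can talk to the same endpoint. A minimal sketch using the openai Python package (assumes pip install openai; base_url and port match the server started above):
from openai import OpenAI

# Point the client at the local trtllm-serve endpoint; no real API key is required
client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B",
    messages=[{"role": "user", "content": "Hello!"}],
    temperature=0.7,
    max_tokens=100,
)
print(response.choices[0].message.content)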
Key features
Performance optimizations
- In-flight batching: Dynamic batching during generation
- Paged KV cache: Efficient memory management (sizing sketch after this list)
- Flash Attention: Optimized attention kernels
- Quantization: FP8, INT4, FP4 for 2-4× faster inference
- CUDA graphs: Reduced kernel launch overhead
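To see why the paged KV cache and low-precision formats matter at these batch sizes, here is a back-of-envelope sizing sketch for Llama-3-8B (32 layers, 8 KV heads, head dimension 128 are the model's architecture constants; the batch and sequence length are illustrative):
# KV-cache footprint per token for Llama-3-8B in FP16 (2 bytes per value)
layers, kv_heads, head_dim, bytes_fp16 = 32, 8, 128, 2
per_token = 2 * layers * kv_heads * head_dim * bytes_fp16   # K and V tensors
print(per_token // 1024, "KiB per token")                    # 128 KiB

# 256 concurrent sequences at 4096 tokens each
total = 256 * 4096 * per_token
print(total / 2**30, "GiB of KV cache")                      # 128 GiB in FP16, roughly half with an FP8 KV cache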
Parallelism
- Tensor parallelism (TP): Split model across GPUs (see the sketch after this list)
- Pipeline parallelism (PP): Layer-wise distribution
- Expert parallelism: For Mixture-of-Experts models
- Multi-node: Scale beyond single machine
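A minimal sketch of how these strategies map onto the LLM API; tensor_parallel_size appears in the multi-GPU example further down, while pipeline_parallel_size is an assumed parameter name to be checked against the current API reference:
from tensorrt_llm import LLM

# 8 GPUs in one node: 4-way tensor parallel x 2-way pipeline parallel
llm = LLM(
    model="meta-llama/Meta-Llama-3-70B",
    tensor_parallel_size=4,      # shard each layer's weights across 4 GPUs
    pipeline_parallel_size=2,    # assumed name: split the layer stack across 2 GPU groups
)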
Advanced features
- Speculative decoding: Faster generation with draft models (toy sketch after this list)
- LoRA serving: Efficient multi-adapter deployment
- Disaggregated serving: Separate prefill and generation
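To make the speculative-decoding idea concrete, here is a toy, framework-free sketch of the greedy draft-and-verify loop; the two stand-in "models" are plain functions, whereas TensorRT-LLM runs a real draft engine against the target engine:
# Toy greedy speculative decoding: a cheap draft model proposes k tokens,
# the target model checks them and keeps the longest agreeing prefix.
def draft_next(ctx):                       # stand-in for a small, fast draft model
    return (ctx[-1] + 1) % 50

def target_next(ctx):                      # stand-in for the large target model
    return (ctx[-1] + 1) % 50 if len(ctx) % 7 else (ctx[-1] + 2) % 50

def speculative_step(ctx, k=4):
    draft, tmp = [], list(ctx)
    for _ in range(k):                     # 1) draft k tokens autoregressively (cheap)
        t = draft_next(tmp)
        draft.append(t)
        tmp.append(t)
    accepted, tmp = [], list(ctx)
    for t in draft:                        # 2) verify each drafted token against the target
        expected = target_next(tmp)
        if t != expected:
            accepted.append(expected)      # first mismatch: keep the target's token and stop
            break
        accepted.append(t)
        tmp.append(t)
    return accepted                        # several tokens accepted per target pass on average

tokens = [0]
for _ in range(5):
    tokens += speculative_step(tokens)
print(tokens)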
Common patterns
Quantized model (FP8)
from tensorrt_llm import LLM
# Load FP8 quantized model (2× faster, 50% memory)
llm = LLM(
    model="meta-llama/Meta-Llama-3-70B",
    dtype="fp8",
    max_num_tokens=8192
)
# Inference same as before
outputs = llm.generate(["Summarize this article..."])
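The "2× faster, 50% memory" figure for weights follows directly from the bytes per parameter; a quick illustrative check for a 70B-parameter model (weights only, ignoring activations and KV cache):
params = 70e9
print(params * 2 / 2**30, "GiB of weights in FP16")  # ~130 GiB
print(params * 1 / 2**30, "GiB of weights in FP8")   # ~65 GiB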
Multi-GPU deployment
# Tensor parallelism across 8 GPUs
llm = LLM(
    model="meta-llama/Llama-3.1-405B",   # the 405B model ships with Llama 3.1
    tensor_parallel_size=8,
    dtype="fp8"
)
Batch inference
# Process 100 prompts efficiently
prompts = [f"Question {i}: ..." for i in range(100)]
outputs = llm.generate(
    prompts,
    sampling_params=SamplingParams(max_tokens=200)
)
# Automatic in-flight batching for maximum throughput
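A simple way to observe the effect of in-flight batching is to time the whole batch and compute aggregate throughput; the token_ids field below is an assumption about the output object, so adjust it to whatever your installed version exposes:
import time

start = time.perf_counter()
outputs = llm.generate(prompts, sampling_params=SamplingParams(max_tokens=200))
elapsed = time.perf_counter() - start

# Count generated tokens across all completions (field names assumed)
generated = sum(len(out.outputs[0].token_ids) for out in outputs)
print(f"{generated} tokens in {elapsed:.1f}s -> {generated / elapsed:.0f} tokens/sec")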
Performance benchmarks
Meta Llama 3-8B (H100 GPU):
- Throughput: 24,000 tokens/sec
- Latency: ~10ms per token
- vs baseline PyTorch: up to 100× faster
Llama 3-70B (8× A100 80GB):
- FP8 quantization: 2× faster than FP16
- Memory: 50% reduction with FP8
Supported models
- LLaMA family: Llama 2, Llama 3, CodeLlama
- GPT family: GPT-2, GPT-J, GPT-NeoX
- Qwen: Qwen, Qwen2, QwQ
- DeepSeek: DeepSeek-V2, DeepSeek-V3
- Mixtral: Mixtral-8x7B, Mixtral-8x22B
- Vision: LLaVA, Phi-3-vision
- 100+ models on HuggingFace
References
- Optimization Guide - Quantization, batching, KV cache tuning
- Multi-GPU Setup - Tensor/pipeline parallelism, multi-node
- Serving Guide - Production deployment, monitoring, autoscaling
Resources
Source
https://github.com/Orchestra-Research/AI-Research-SKILLs/blob/main/12-inference-serving/tensorrt-llm/SKILL.md
Overview
TensorRT-LLM is NVIDIA's library for optimizing LLM inference with state-of-the-art throughput and latency on NVIDIA GPUs (A100/H100). It enables 10-100x faster inference than PyTorch and supports quantization (FP8/INT4), in-flight batching, and multi-GPU scaling for production deployment.
How This Skill Works
The tool compiles LLM models to TensorRT kernels, applying quantization and optimized attention to maximize throughput. It supports in-flight batching, CUDA graphs, and various parallelism strategies (tensor, pipeline, expert) across GPUs or nodes, with a simple serving interface via trtllm-serve for production deployments.
When to Use It
- Deploying on NVIDIA GPUs (A100, H100, GB200)
- Need maximum throughput (e.g., 24,000+ tokens/sec on Llama 3)
- Require low latency for real-time applications
- Working with quantized models (FP8, INT4, FP4)
- Scaling across multiple GPUs or nodes
Quick Start
- Step 1: Install runtime (docker pull nvidia/tensorrt_llm:latest or pip install tensorrt_llm==1.2.0rc3)
- Step 2: Initialize the LLM with a model, e.g., LLM(model='meta-llama/Meta-Llama-3-8B') and optional dtype='fp8'
- Step 3: Run inference or start the trtllm-serve server (e.g., llm.generate(...) or trtllm-serve meta-llama/Meta-Llama-3-8B --tp_size 4 --max_batch_size 256 --max_num_tokens 4096)
Best Practices
- Enable in-flight batching and tune max_batch_size to maximize throughput
- Use FP8/INT4/FP4 quantization when model accuracy is acceptable to gain 2-4x speedups
- Leverage tensor/pipeline/expert parallelism and multi-node deployment for large models
- Utilize CUDA graphs to reduce kernel launch overhead
- Monitor GPU utilization and adjust TP/PP settings to balance latency and throughput
Example Use Cases
- Achieved 24k+ tokens/sec on Llama 3 with A100/H100 using TensorRT-LLM
- 2–4x faster inference when quantizing models to FP8/INT4 (e.g., Meta-Llama-3-70B FP8)
- Multi-GPU scaling with tensor parallelism across 8 GPUs for large models
- In-flight batching enabled for bursty workloads
- trtllm-serve powering production endpoints across multiple nodes
Related Skills
deepspeed
Orchestra-Research/AI-Research-SKILLs
Expert guidance for distributed training with DeepSpeed - ZeRO optimization stages, pipeline parallelism, FP16/BF16/FP8, 1-bit Adam, sparse attention
langchain
Orchestra-Research/AI-Research-SKILLs
Framework for building LLM-powered applications with agents, chains, and RAG. Supports multiple providers (OpenAI, Anthropic, Google), 500+ integrations, ReAct agents, tool calling, memory management, and vector store retrieval. Use for building chatbots, question-answering systems, autonomous agents, or RAG applications. Best for rapid prototyping and production deployments.
langsmith-observability
Orchestra-Research/AI-Research-SKILLs
LLM observability platform for tracing, evaluation, and monitoring. Use when debugging LLM applications, evaluating model outputs against datasets, monitoring production systems, or building systematic testing pipelines for AI applications.
nemo-evaluator-sdk
Orchestra-Research/AI-Research-SKILLs
Evaluates LLMs across 100+ benchmarks from 18+ harnesses (MMLU, HumanEval, GSM8K, safety, VLM) with multi-backend execution. Use when needing scalable evaluation on local Docker, Slurm HPC, or cloud platforms. NVIDIA's enterprise-grade platform with container-first architecture for reproducible benchmarking.
nemo-guardrails
Orchestra-Research/AI-Research-SKILLs
NVIDIA's runtime safety framework for LLM applications. Features jailbreak detection, input/output validation, fact-checking, hallucination detection, PII filtering, toxicity detection. Uses Colang 2.0 DSL for programmable rails. Production-ready, runs on T4 GPU.
sentence-transformers
Orchestra-Research/AI-Research-SKILLs
Framework for state-of-the-art sentence, text, and image embeddings. Provides 5000+ pre-trained models for semantic similarity, clustering, and retrieval. Supports multilingual, domain-specific, and multimodal models. Use for generating embeddings for RAG, semantic search, or similarity tasks. Best for production embedding generation.