
llama.cpp

Pure C/C++ LLM inference with minimal dependencies, optimized for CPUs and non-NVIDIA hardware.

When to use llama.cpp

Use llama.cpp when:

  • Running on CPU-only machines
  • Deploying on Apple Silicon (M1/M2/M3/M4)
  • Using AMD or Intel GPUs (no CUDA)
  • Edge deployment (Raspberry Pi, embedded systems)
  • Need simple deployment without Docker/Python

Use TensorRT-LLM instead when:

  • Have NVIDIA GPUs (A100/H100)
  • Need maximum throughput (100K+ tok/s)
  • Running in datacenter with CUDA

Use vLLM instead when:

  • Have NVIDIA GPUs
  • Need Python-first API
  • Want PagedAttention

Quick start

Installation

# macOS/Linux
brew install llama.cpp

# Or build from source
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# With Metal (Apple Silicon)
make LLAMA_METAL=1

# With CUDA (NVIDIA)
make LLAMA_CUDA=1

# With ROCm (AMD)
make LLAMA_HIP=1
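
After building, a quick sanity check confirms the binaries exist and shows which backend was compiled in (exact output varies by version):

# Print version and build information
./llama-cli --version

# List available runtime options
./llama-cli --help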

Download model

# Download from HuggingFace (GGUF format)
huggingface-cli download \
    TheBloke/Llama-2-7B-Chat-GGUF \
    llama-2-7b-chat.Q4_K_M.gguf \
    --local-dir models/

# Or convert from HuggingFace
python convert_hf_to_gguf.py models/llama-2-7b-chat/

Run inference

# Simple chat
./llama-cli \
    -m models/llama-2-7b-chat.Q4_K_M.gguf \
    -p "Explain quantum computing" \
    -n 256  # Max tokens

# Interactive chat
./llama-cli \
    -m models/llama-2-7b-chat.Q4_K_M.gguf \
    --interactive

Server mode

# Start OpenAI-compatible server
./llama-server \
    -m models/llama-2-7b-chat.Q4_K_M.gguf \
    --host 0.0.0.0 \
    --port 8080 \
    -ngl 32  # Offload 32 layers to GPU

# Client request
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-2-7b-chat",
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.7,
    "max_tokens": 100
  }'
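
Besides the OpenAI-compatible route above, llama-server also exposes native endpoints; a minimal sketch (endpoint names and fields as in recent llama.cpp builds, so verify against your version's server README):

# Liveness check
curl http://localhost:8080/health

# Native completion endpoint (non-OpenAI schema)
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Explain quantum computing in one sentence.",
    "n_predict": 64,
    "temperature": 0.7
  }'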

Quantization formats

GGUF format overview

Format  | Bits | Size (7B) | Speed   | Quality   | Use Case
Q4_K_M  | 4.5  | 4.1 GB    | Fast    | Good      | Recommended default
Q4_K_S  | 4.3  | 3.9 GB    | Faster  | Lower     | Speed critical
Q5_K_M  | 5.5  | 4.8 GB    | Medium  | Better    | Quality critical
Q6_K    | 6.5  | 5.5 GB    | Slower  | Best      | Maximum quality
Q8_0    | 8.0  | 7.0 GB    | Slow    | Excellent | Minimal degradation
Q2_K    | 2.5  | 2.7 GB    | Fastest | Poor      | Testing only

Choosing quantization

# General use (balanced)
Q4_K_M  # 4-bit, medium quality

# Maximum speed (more degradation)
Q2_K or Q3_K_M

# Maximum quality (slower)
Q6_K or Q8_0

# Very large models (70B, 405B)
Q3_K_M or Q4_K_S  # Lower bits to fit in memory
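
Rather than trusting the table alone, you can time candidate quantizations on your own hardware with the bundled llama-bench tool; a sketch (flags as in current llama.cpp; the model file names are placeholders you would download first):

# Compare prompt processing (-p tokens) and generation (-n tokens) speed
for q in Q4_K_M Q5_K_M Q6_K; do
    ./llama-bench -m "models/llama-2-7b-chat.${q}.gguf" -p 512 -n 128
done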

Hardware acceleration

Apple Silicon (Metal)

# Build with Metal
make LLAMA_METAL=1

# Run with GPU acceleration (automatic)
./llama-cli -m model.gguf -ngl 999  # Offload all layers

# Performance: M3 Max 40-60 tokens/sec (Llama 2-7B Q4_K_M)

NVIDIA GPUs (CUDA)

# Build with CUDA
make LLAMA_CUDA=1

# Offload layers to GPU
./llama-cli -m model.gguf -ngl 35  # Offload 35/40 layers

# Hybrid CPU+GPU for large models
./llama-cli -m llama-70b.Q4_K_M.gguf -ngl 20  # GPU: 20 layers, CPU: rest
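
When tuning -ngl for hybrid offload, it helps to check VRAM headroom and increase the layer count until memory is nearly full; a rough sketch (the query fields are standard nvidia-smi options, and the layer counts are illustrative):

# Check total and used VRAM before picking a layer count
nvidia-smi --query-gpu=memory.total,memory.used --format=csv

# Try a higher offload, watch VRAM, and back off if loading fails
./llama-cli -m llama-70b.Q4_K_M.gguf -ngl 24 -p "test" -n 16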

AMD GPUs (ROCm)

# Build with ROCm
make LLAMA_HIP=1

# Run with AMD GPU
./llama-cli -m model.gguf -ngl 999

Common patterns

Batch processing

# Read a prompt from a file (the whole file is treated as one prompt)
./llama-cli \
    -m model.gguf \
    -f prompts.txt \
    --batch-size 512 \
    -n 100
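
If you instead need one independent completion per line of a prompts file, a plain shell loop over llama-cli works; a minimal sketch (prompts.txt and out.txt are placeholder names; the model is reloaded per prompt, trading speed for simplicity):

# One completion per line of prompts.txt; generations go to out.txt
while IFS= read -r prompt; do
    # Redirect stdin so llama-cli cannot consume the remaining lines
    ./llama-cli -m model.gguf -p "$prompt" -n 100 </dev/null
done < prompts.txt > out.txt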

Constrained generation

# JSON output with grammar
./llama-cli \
    -m model.gguf \
    -p "Generate a person: " \
    --grammar-file grammars/json.gbnf

# Outputs valid JSON only
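
To constrain output with your own rules, you can write a small GBNF file; a minimal sketch (GBNF syntax as used by the grammars/ directory in the llama.cpp repo; file name and prompt are illustrative):

# Write a grammar that only allows the strings "yes" or "no"
cat > yes_no.gbnf <<'EOF'
root ::= ("yes" | "no")
EOF

# Generation is restricted to strings the grammar accepts
./llama-cli \
    -m model.gguf \
    -p "Is the sky blue? Answer: " \
    --grammar-file yes_no.gbnf \
    -n 8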

Context size

# Increase context (default 512)
./llama-cli \
    -m model.gguf \
    -c 4096  # 4K context window

# Very long context (if model supports)
./llama-cli -m model.gguf -c 32768  # 32K context

Performance benchmarks

CPU performance (Llama 2-7B Q4_K_M)

CPU               | Threads | Speed    | Cost
Apple M3 Max      | 16      | 50 tok/s | $0 (local)
AMD Ryzen 9 7950X | 32      | 35 tok/s | $0.50/hour
Intel i9-13900K   | 32      | 30 tok/s | $0.40/hour
AWS c7i.16xlarge  | 64      | 40 tok/s | $2.88/hour

GPU acceleration (Llama 2-7B Q4_K_M)

GPU                  | Speed     | vs CPU | Cost
NVIDIA RTX 4090      | 120 tok/s | 3-4×   | $0 (local)
NVIDIA A10           | 80 tok/s  | 2-3×   | $1.00/hour
AMD MI250            | 70 tok/s  | n/a    | $2.00/hour
Apple M3 Max (Metal) | 50 tok/s  | ~Same  | $0 (local)

Supported models

LLaMA family:

  • Llama 2 (7B, 13B, 70B)
  • Llama 3 (8B, 70B, 405B)
  • Code Llama

Mistral family:

  • Mistral 7B
  • Mixtral 8x7B, 8x22B

Other:

  • Falcon, BLOOM, GPT-J
  • Phi-3, Gemma, Qwen
  • LLaVA (vision), Whisper (audio)

Find models: https://huggingface.co/models?library=gguf

Source

https://github.com/Orchestra-Research/AI-Research-SKILLs/blob/main/12-inference-serving/llama-cpp/SKILL.md

Overview

llama.cpp provides pure C/C++ LLM inference with minimal dependencies, optimized for CPUs and non-NVIDIA hardware. It shines on Apple Silicon Macs (M1/M2/M3), on AMD/Intel GPUs, and in edge deployments, and it supports GGUF quantization (1.5-8 bit) to reduce memory and deliver 4–10× speedups versus PyTorch on CPU.

How This Skill Works

Built as a pure C/C++ inference engine, llama.cpp runs models on CPU or non-NVIDIA GPUs without CUDA. It supports GGUF quantization to shrink memory footprints and accelerate performance, and is controllable via build flags (LLAMA_METAL for Apple Silicon, LLAMA_CUDA for NVIDIA, LLAMA_HIP for AMD). It also provides a server mode (llama-server) and a lightweight Python binding (llama-cpp-python) for integration.
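
If the Python binding route is preferred, llama-cpp-python also bundles an OpenAI-compatible server; a sketch (package, module, and flags as published on PyPI, so check its docs for your version):

# Install the binding with the server extra (set CMAKE_ARGS for Metal/CUDA builds)
pip install 'llama-cpp-python[server]'

# Serve a GGUF model over an OpenAI-compatible API
python -m llama_cpp.server \
    --model models/llama-2-7b-chat.Q4_K_M.gguf \
    --host 0.0.0.0 --port 8000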

When to Use It

  • Running on CPU-only machines (no CUDA)
  • Deploying on Apple Silicon (M1/M2/M3)
  • Using AMD or Intel GPUs without CUDA
  • Edge deployment on Raspberry Pi or embedded systems
  • Needing a simple deployment without Docker or Python

Quick Start

  1. Install or build llama.cpp (brew install llama.cpp on macOS/Linux, or clone the repo and run make; enable Metal/CUDA/ROCm as needed)
  2. Download or convert a GGUF model (huggingface-cli download TheBloke/Llama-2-7B-Chat-GGUF llama-2-7b-chat.Q4_K_M.gguf, or run convert_hf_to_gguf.py)
  3. Run a quick test (llama-cli -m models/llama-2-7b-chat.Q4_K_M.gguf -p 'Explain quantum computing' -n 256) or start the server (llama-server ...); the three steps are consolidated in the sketch below
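
Consolidated, the quick start looks roughly like this on macOS/Linux (model name and paths taken from the examples above; adjust quantization and ports for your hardware):

# 1. Install llama.cpp (binaries land on PATH as llama-cli / llama-server)
brew install llama.cpp

# 2. Fetch a 4-bit GGUF model
huggingface-cli download TheBloke/Llama-2-7B-Chat-GGUF \
    llama-2-7b-chat.Q4_K_M.gguf --local-dir models/

# 3. Smoke-test generation, then serve an OpenAI-compatible API
llama-cli -m models/llama-2-7b-chat.Q4_K_M.gguf -p "Explain quantum computing" -n 256
llama-server -m models/llama-2-7b-chat.Q4_K_M.gguf --port 8080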

Best Practices

  • Start with GGUF Q4_K_M (4-bit, balanced) as the default to balance memory, speed, and quality
  • Choose quantization bits to trade memory, speed, and quality (Q2_K, Q4_K_M, Q6_K, Q8_0, etc.)
  • Build with the correct hardware flags: LLAMA_METAL for Apple Silicon; LLAMA_CUDA for NVIDIA; LLAMA_HIP for AMD
  • Use llama-server for an OpenAI-compatible API endpoint to service multiple clients
  • Benchmark different model sizes and quantizations to fit RAM and latency constraints

Example Use Cases

  • Running Llama-2-7B-GGUF on a MacBook Pro with Apple Silicon acceleration
  • CPU-only edge deployment on a Raspberry Pi using GGUF 4-bit quantization
  • Inference on AMD/Intel GPUs without CUDA using ROCm/hip offload
  • Serving a local chat API via llama-server for internal tools
  • Converting HuggingFace models to GGUF and validating performance on CPU

Related Skills

quantizing-models-bitsandbytes

Orchestra-Research/AI-Research-SKILLs

Quantizes LLMs to 8-bit or 4-bit for 50-75% memory reduction with minimal accuracy loss. Use when GPU memory is limited, need to fit larger models, or want faster inference. Supports INT8, NF4, FP4 formats, QLoRA training, and 8-bit optimizers. Works with HuggingFace Transformers.

sglang

Orchestra-Research/AI-Research-SKILLs

Fast structured generation and serving for LLMs with RadixAttention prefix caching. Use for JSON/regex outputs, constrained decoding, agentic workflows with tool calls, or when you need 5× faster inference than vLLM with prefix sharing. Powers 300,000+ GPUs at xAI, AMD, NVIDIA, and LinkedIn.

awq-quantization

Orchestra-Research/AI-Research-SKILLs

Activation-aware weight quantization for 4-bit LLM compression with 3x speedup and minimal accuracy loss. Use when deploying large models (7B-70B) on limited GPU memory, when you need faster inference than GPTQ with better accuracy preservation, or for instruction-tuned and multimodal models. MLSys 2024 Best Paper Award winner.

gptq

Orchestra-Research/AI-Research-SKILLs

Post-training 4-bit quantization for LLMs with minimal accuracy loss. Use for deploying large models (70B, 405B) on consumer GPUs, when you need 4× memory reduction with <2% perplexity degradation, or for faster inference (3-4× speedup) vs FP16. Integrates with transformers and PEFT for QLoRA fine-tuning.

hqq-quantization

Orchestra-Research/AI-Research-SKILLs

Half-Quadratic Quantization for LLMs without calibration data. Use when quantizing models to 4/3/2-bit precision without needing calibration datasets, for fast quantization workflows, or when deploying with vLLM or HuggingFace Transformers.

tensorrt-llm

Orchestra-Research/AI-Research-SKILLs

Optimizes LLM inference with NVIDIA TensorRT for maximum throughput and lowest latency. Use for production deployment on NVIDIA GPUs (A100/H100), when you need 10-100x faster inference than PyTorch, or for serving models with quantization (FP8/INT4), in-flight batching, and multi-GPU scaling.
