
Local Whisper (cpp)

Verified

@wuxxin

npx machina-cli add skill @wuxxin/local-whisper-cpp --openclaw
Files (1)
SKILL.md
1022 B

Local Whisper (cpp)

Transcribe audio files locally using whisper-cli and the large-v3-turbo model.

Usage

You can use the wrapper script:

  • scripts/whisper-local.sh <audio-file>

Or call the binary directly:

  • whisper-cli -m /usr/share/whisper.cpp-model-large-v3-turbo/ggml-large-v3-turbo.bin -f <file> -l auto -nt

Scripts

  • Location: scripts/whisper-local.sh (inside skill folder)
  • Model: /usr/share/whisper.cpp-model-large-v3-turbo/ggml-large-v3-turbo.bin
  • GPU: Used automatically when whisper-cli is built with GPU support (e.g. CUDA or Metal).
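The wrapper is a thin convenience layer around the binary. A minimal sketch of what scripts/whisper-local.sh plausibly contains, reconstructed from the flags and model path above (the actual script shipped in the skill folder may differ):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of scripts/whisper-local.sh, reconstructed from
# this README; the real script inside the skill folder may differ.
MODEL=/usr/share/whisper.cpp-model-large-v3-turbo/ggml-large-v3-turbo.bin

if [ $# -lt 1 ]; then
  echo "usage: whisper-local.sh <audio-file>" >&2
else
  # -l auto: detect the spoken language; -nt: omit timestamps
  whisper-cli -m "$MODEL" -f "$1" -l auto -nt
fi
```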

Setup

Create the target directory, then download the model (the URL contains a query string, so quote it):

mkdir -p /usr/share/whisper.cpp-model-large-v3-turbo
wget "https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-large-v3-turbo.bin?download=true" -O /usr/share/whisper.cpp-model-large-v3-turbo/ggml-large-v3-turbo.bin

Source

git clone https://clawhub.ai/wuxxin/local-whisper-cpp

Overview

Transcribes audio locally using whisper-cli and the whisper.cpp large-v3-turbo model. This setup keeps data on your machine, reducing latency and improving privacy for transcription tasks.

How This Skill Works

Use either the wrapper script scripts/whisper-local.sh <audio-file> or call the whisper-cli binary directly. Either way, the model is loaded from /usr/share/whisper.cpp-model-large-v3-turbo/ggml-large-v3-turbo.bin and run with -f <file> -l auto -nt (automatic language detection, no timestamps). GPU acceleration is used when whisper-cli was built with GPU support, giving faster results.

When to Use It

  • Offline transcription of audio without sending data to cloud services
  • Privacy-sensitive projects where data must stay on-device
  • Transcribing single audio files quickly with GPU acceleration
  • Building local transcription workflows or batch processing pipelines
  • Using whisper.cpp large-v3-turbo for higher accuracy on local hardware
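For the batch-processing case, a simple loop over whisper-cli is enough. A sketch, where the recordings/ directory and the .txt output naming are illustrative assumptions, not part of the skill:

```shell
#!/usr/bin/env bash
# Batch transcription sketch: one transcript file per input recording.
# "recordings/" and the .txt naming are assumptions, not part of the skill.
MODEL=/usr/share/whisper.cpp-model-large-v3-turbo/ggml-large-v3-turbo.bin
shopt -s nullglob   # an empty directory yields zero iterations, not a literal glob
for f in recordings/*.wav; do
  echo "transcribing: $f"
  whisper-cli -m "$MODEL" -f "$f" -l auto -nt > "${f%.wav}.txt"
done
```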

Quick Start

  1. Download the model to /usr/share/whisper.cpp-model-large-v3-turbo/ggml-large-v3-turbo.bin using the wget command above
  2. Run the local transcription, e.g., scripts/whisper-local.sh <audio-file> or whisper-cli -m /usr/share/whisper.cpp-model-large-v3-turbo/ggml-large-v3-turbo.bin -f <file> -l auto -nt
  3. Retrieve the transcript from the output and verify its accuracy
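The steps above can be sanity-checked with a short snippet; the audio filename below is a placeholder:

```shell
#!/usr/bin/env bash
# Verifies step 1 completed (model on disk) and prints the step 2 command.
# "meeting.wav" is a placeholder filename.
MODEL=/usr/share/whisper.cpp-model-large-v3-turbo/ggml-large-v3-turbo.bin
if [ -f "$MODEL" ]; then
  echo "model found: $MODEL"
else
  echo "model missing -- rerun the download in step 1" >&2
fi
echo "whisper-cli -m $MODEL -f meeting.wav -l auto -nt"
```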

Best Practices

  • Download and store the model once at /usr/share/whisper.cpp-model-large-v3-turbo/ggml-large-v3-turbo.bin and reuse it
  • Prefer the wrapper script scripts/whisper-local.sh for simple usage, or replicate the whisper-cli -m ... -f ... -l auto -nt invocation in automation
  • Enable GPU if available to speed up transcription via whisper-cli
  • Test with representative audio to verify language detection and accuracy (-l auto)
  • Log commands and outputs to support reproducibility and auditing
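The logging practice can be as simple as a wrapper function that records each command and its exit status. A sketch, where the log path and the function name are assumptions:

```shell
#!/usr/bin/env bash
# Logs every invocation plus its exit code for reproducibility/auditing.
LOG="${TMPDIR:-/tmp}/whisper-local.log"   # assumed log location

run_logged() {
  printf '%s | %s\n' "$(date -u +%FT%TZ)" "$*" >> "$LOG"
  "$@"
  rc=$?
  printf '%s | exit=%d\n' "$(date -u +%FT%TZ)" "$rc" >> "$LOG"
  return $rc
}

# Example (only runs when whisper-cli is installed):
if command -v whisper-cli >/dev/null 2>&1; then
  run_logged whisper-cli -m /usr/share/whisper.cpp-model-large-v3-turbo/ggml-large-v3-turbo.bin -f input.wav -l auto -nt || true
fi
```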

Example Use Cases

  • Transcribing a team meeting recorded locally on a workstation
  • Converting an in-person interview to text for analysis without uploading audio
  • Generating podcast transcripts offline to publish alongside episodes
  • Archiving lecture recordings for accessibility in education
  • Automating captions for local video projects during post-production
