npx machina-cli add skill Orchestra-Research/AI-Research-SKILLs/whisper --openclaw

Whisper - Robust Speech Recognition

OpenAI's multilingual speech recognition model.

When to use Whisper

Use when:

  • Speech-to-text transcription (99 languages)
  • Podcast/video transcription
  • Meeting notes automation
  • Translation to English
  • Noisy audio transcription
  • Multilingual audio processing

Metrics:

  • 72,900+ GitHub stars
  • 99 languages supported
  • Trained on 680,000 hours of audio
  • MIT License

Use alternatives instead:

  • AssemblyAI: Managed API, speaker diarization
  • Deepgram: Real-time streaming ASR
  • Google Speech-to-Text: Cloud-based

Quick start

Installation

# Requires Python 3.8-3.11
pip install -U openai-whisper

# Requires ffmpeg
# macOS: brew install ffmpeg
# Ubuntu: sudo apt install ffmpeg
# Windows: choco install ffmpeg

Basic transcription

import whisper

# Load model
model = whisper.load_model("base")

# Transcribe
result = model.transcribe("audio.mp3")

# Print text
print(result["text"])

# Access segments
for segment in result["segments"]:
    print(f"[{segment['start']:.2f}s - {segment['end']:.2f}s] {segment['text']}")

Model sizes

# Available models
models = ["tiny", "base", "small", "medium", "large", "turbo"]

# Load specific model
model = whisper.load_model("turbo")  # Fastest, good quality

Model    Parameters   English-only   Multilingual   Relative speed   Required VRAM
tiny     39M          tiny.en        tiny           ~32x             ~1 GB
base     74M          base.en        base           ~16x             ~1 GB
small    244M         small.en       small          ~6x              ~2 GB
medium   769M         medium.en      medium         ~2x              ~5 GB
large    1550M        N/A            large          1x               ~10 GB
turbo    809M         N/A            turbo          ~8x              ~6 GB

Recommendation: Use turbo for best speed/quality, base for prototyping
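
If you want to pick a size programmatically, a minimal sketch that maps available GPU memory to the VRAM column above (the thresholds and the pick_model_name helper are assumptions, not part of Whisper):

import torch
import whisper

def pick_model_name() -> str:
    # Hypothetical helper: fall back to a small model on CPU-only machines
    if not torch.cuda.is_available():
        return "base"
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1e9
    # Thresholds loosely follow the "Required VRAM" column above
    if vram_gb >= 10:
        return "large"
    if vram_gb >= 6:
        return "turbo"
    if vram_gb >= 5:
        return "medium"
    if vram_gb >= 2:
        return "small"
    return "base"

model = whisper.load_model(pick_model_name())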

Transcription options

Language specification

# Auto-detect language
result = model.transcribe("audio.mp3")

# Specify language (faster)
result = model.transcribe("audio.mp3", language="en")

# Supported: en, es, fr, de, it, pt, ru, ja, ko, zh, and 89 more

Task selection

# Transcription (default)
result = model.transcribe("audio.mp3", task="transcribe")

# Translation to English
result = model.transcribe("spanish.mp3", task="translate")
# Input: Spanish audio → Output: English text

Initial prompt

# Improve accuracy with context
result = model.transcribe(
    "audio.mp3",
    initial_prompt="This is a technical podcast about machine learning and AI."
)

# Helps with:
# - Technical terms
# - Proper nouns
# - Domain-specific vocabulary

Timestamps

# Word-level timestamps
result = model.transcribe("audio.mp3", word_timestamps=True)

for segment in result["segments"]:
    for word in segment["words"]:
        print(f"{word['word']} ({word['start']:.2f}s - {word['end']:.2f}s)")

Temperature fallback

# Fall back to higher temperatures when decoding fails quality checks (default schedule shown)
result = model.transcribe(
    "audio.mp3",
    temperature=(0.0, 0.2, 0.4, 0.6, 0.8, 1.0)
)

Command line usage

# Basic transcription
whisper audio.mp3

# Specify model
whisper audio.mp3 --model turbo

# Output formats
whisper audio.mp3 --output_format txt     # Plain text
whisper audio.mp3 --output_format srt     # Subtitles
whisper audio.mp3 --output_format vtt     # WebVTT
whisper audio.mp3 --output_format json    # JSON with timestamps

# Language
whisper audio.mp3 --language Spanish

# Translation
whisper spanish.mp3 --task translate

Batch processing

import os
import whisper

model = whisper.load_model("turbo")
audio_files = ["file1.mp3", "file2.mp3", "file3.mp3"]

for audio_file in audio_files:
    print(f"Transcribing {audio_file}...")
    result = model.transcribe(audio_file)

    # Save the transcript next to the audio file
    output_file = os.path.splitext(audio_file)[0] + ".txt"
    with open(output_file, "w", encoding="utf-8") as f:
        f.write(result["text"])

Near-real-time transcription

# For lower-latency pipelines, use faster-whisper (a CTranslate2 reimplementation)
# pip install faster-whisper

from faster_whisper import WhisperModel

model = WhisperModel("base", device="cuda", compute_type="float16")

# transcribe() yields segments lazily (a generator), so results stream as they are decoded
segments, info = model.transcribe("audio.mp3", beam_size=5)

for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")

GPU acceleration

import whisper

# Automatically uses GPU if available
model = whisper.load_model("turbo")

# Force CPU
model = whisper.load_model("turbo", device="cpu")

# Force GPU
model = whisper.load_model("turbo", device="cuda")

# 10-20× faster on GPU

Integration with other tools

Subtitle generation

# Generate SRT subtitles
whisper video.mp4 --output_format srt --language English

# Output: video.srt
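
To produce the same SRT output from the Python API, a minimal sketch that formats result["segments"] by hand (the srt_timestamp helper is ad hoc, not part of Whisper):

import whisper

def srt_timestamp(seconds: float) -> str:
    # SRT timestamps use HH:MM:SS,mmm
    ms = int(round(seconds * 1000))
    hours, ms = divmod(ms, 3_600_000)
    minutes, ms = divmod(ms, 60_000)
    secs, ms = divmod(ms, 1_000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{ms:03d}"

model = whisper.load_model("turbo")
result = model.transcribe("video.mp4")

with open("video.srt", "w", encoding="utf-8") as f:
    for i, seg in enumerate(result["segments"], start=1):
        f.write(f"{i}\n")
        f.write(f"{srt_timestamp(seg['start'])} --> {srt_timestamp(seg['end'])}\n")
        f.write(seg["text"].strip() + "\n\n")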

With LangChain

# GenericLoader + OpenAIWhisperParser transcribes audio via the hosted Whisper API
# (OpenAIWhisperParserLocal runs the model locally instead)
from langchain_community.document_loaders.generic import GenericLoader
from langchain_community.document_loaders.parsers.audio import OpenAIWhisperParser

loader = GenericLoader.from_filesystem("./audio", glob="*.mp3", parser=OpenAIWhisperParser())
docs = loader.load()

# Use transcription in RAG
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings

vectorstore = Chroma.from_documents(docs, OpenAIEmbeddings())

Extract audio from video

# Use ffmpeg to extract audio
ffmpeg -i video.mp4 -vn -acodec pcm_s16le audio.wav

# Then transcribe
whisper audio.wav
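
The same extract-then-transcribe flow can be scripted from Python; a sketch that shells out to ffmpeg (the extra -ar/-ac flags are an assumption for a 16 kHz mono WAV, which is what Whisper resamples to anyway):

import subprocess
import whisper

# Extract a 16 kHz mono WAV; -vn drops the video stream
subprocess.run(
    ["ffmpeg", "-y", "-i", "video.mp4", "-vn", "-acodec", "pcm_s16le",
     "-ar", "16000", "-ac", "1", "audio.wav"],
    check=True,
)

model = whisper.load_model("turbo")
result = model.transcribe("audio.wav")
print(result["text"])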

Best practices

  1. Use turbo model - Best speed/quality for English
  2. Specify language - Faster than auto-detect
  3. Add initial prompt - Improves technical terms
  4. Use GPU - 10-20× faster
  5. Batch process - More efficient
  6. Convert to WAV - Better compatibility
  7. Split long audio - <30 min chunks (see the sketch after this list)
  8. Check language support - Quality varies by language
  9. Use faster-whisper - 4× faster than openai-whisper
  10. Monitor VRAM - Scale model size to hardware
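
A sketch of practice 7 above, splitting a long recording into 30-minute chunks with ffmpeg before transcribing; the chunk length and file names are assumptions:

import glob
import subprocess
import whisper

# Split into 30-minute (1800 s) WAV chunks named chunk_000.wav, chunk_001.wav, ...
subprocess.run(
    ["ffmpeg", "-y", "-i", "long_meeting.mp3", "-f", "segment",
     "-segment_time", "1800", "-c:a", "pcm_s16le", "chunk_%03d.wav"],
    check=True,
)

model = whisper.load_model("turbo")
texts = [model.transcribe(chunk)["text"] for chunk in sorted(glob.glob("chunk_*.wav"))]

with open("long_meeting.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(texts))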

Performance

Model   Real-time factor (CPU)   Real-time factor (GPU)
tiny    ~0.32                    ~0.01
base    ~0.16                    ~0.01
turbo   ~0.08                    ~0.01
large   ~1.0                     ~0.05

Real-time factor = processing time / audio duration; 0.1 means 10× faster than real time
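
A rough way to reproduce these numbers on your own hardware: time a transcription and divide by the audio duration (whisper.load_audio returns 16 kHz samples, so duration is samples / SAMPLE_RATE):

import time
import whisper

model = whisper.load_model("turbo")

audio = whisper.load_audio("audio.mp3")              # float32 samples at 16 kHz
duration = len(audio) / whisper.audio.SAMPLE_RATE    # seconds of audio

start = time.perf_counter()
model.transcribe(audio)
elapsed = time.perf_counter() - start

print(f"Real-time factor: {elapsed / duration:.2f}")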

Language support

Top-supported languages:

  • English (en)
  • Spanish (es)
  • French (fr)
  • German (de)
  • Italian (it)
  • Portuguese (pt)
  • Russian (ru)
  • Japanese (ja)
  • Korean (ko)
  • Chinese (zh)

Full list: 99 languages total

Limitations

  1. Hallucinations - May repeat or invent text (see the sketch after this list)
  2. Long-form accuracy - Degrades on >30 min audio
  3. Speaker identification - No diarization
  4. Accents - Quality varies
  5. Background noise - Can affect accuracy
  6. Real-time latency - Not suitable for live captioning
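
For limitation 1, transcribe() already exposes decoding safeguards that often reduce repetition and invented text; a sketch using the stock thresholds (the values shown are Whisper's defaults, only condition_on_previous_text is changed):

import whisper

model = whisper.load_model("turbo")

result = model.transcribe(
    "noisy_audio.mp3",
    condition_on_previous_text=False,   # don't let one bad window poison the next
    no_speech_threshold=0.6,            # skip windows that are probably silence
    compression_ratio_threshold=2.4,    # treat highly repetitive output as a failed decode
    logprob_threshold=-1.0,             # retry at higher temperature on low confidence
)
print(result["text"])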

Resources

Source

git clone https://github.com/Orchestra-Research/AI-Research-SKILLs
# Skill file: 18-multimodal/whisper/SKILL.md

Overview

Whisper is OpenAI's general-purpose speech recognition model that supports 99 languages and can perform transcription, translation to English, and language identification. It comes in six model sizes from tiny to turbo, enabling robust, multilingual ASR for podcasts, meetings, and diverse audio tasks.

How This Skill Works

Load a Whisper model (e.g., whisper.load_model('base')), then call transcribe on an audio file. The API supports language auto-detection or an explicit language, transcription or translation tasks, and optional features like initial_prompt and word_timestamps to improve accuracy and alignment. The model was trained on 680,000 hours of audio, which is what enables robust cross-language transcription and multilingual processing.
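
A compact sketch tying those options together (the file name and prompt are placeholders):

import whisper

model = whisper.load_model("base")

result = model.transcribe(
    "interview.mp3",
    language="en",                      # omit to auto-detect
    task="transcribe",                  # "translate" produces English text from non-English audio
    initial_prompt="Interview about machine learning infrastructure.",
    word_timestamps=True,
)

print(result["text"])
for segment in result["segments"]:
    print(segment["start"], segment["end"], segment["text"])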

When to Use It

  • Speech-to-text transcription in 99 languages
  • Podcast or video transcription
  • Automating meeting notes from multilingual calls
  • Translating non-English audio to English text
  • Noisy audio or multilingual audio processing

Quick Start

  1. Install the whisper package (pip install -U openai-whisper) and ffmpeg
  2. Load a model (e.g., model = whisper.load_model('base')) and call model.transcribe('audio.mp3')
  3. Access result['text'] for the full transcription or iterate over result['segments'] for timings

Best Practices

  • Start with turbo for fastest results and good quality; switch to base for rapid prototyping
  • Auto-detect language when you don't know the language, but specify language to speed up inference
  • Use task='translate' when you need English output from non-English audio
  • Enable word_timestamps for precise subtitle timing and word-level alignment
  • Provide an informative initial_prompt to improve recognition of domain-specific terms

Example Use Cases

  • Transcribing a Spanish podcast to English text for a newsletter
  • Generating meeting notes from a multilingual team call
  • Creating subtitles for a noisy outdoor interview in multiple languages
  • Transcribing a conference talk and extracting quotes with timestamps
  • Translating a German lecture into English text for a worldwide audience


Related Skills

llamaindex

Orchestra-Research/AI-Research-SKILLs

Data framework for building LLM applications with RAG. Specializes in document ingestion (300+ connectors), indexing, and querying. Features vector indices, query engines, agents, and multi-modal support. Use for document Q&A, chatbots, knowledge retrieval, or building RAG pipelines. Best for data-centric LLM applications.

clip

Orchestra-Research/AI-Research-SKILLs

OpenAI's model connecting vision and language. Enables zero-shot image classification, image-text matching, and cross-modal retrieval. Trained on 400M image-text pairs. Use for image search, content moderation, or vision-language tasks without fine-tuning. Best for general-purpose image understanding.

audiocraft-audio-generation

Orchestra-Research/AI-Research-SKILLs

PyTorch library for audio generation including text-to-music (MusicGen) and text-to-sound (AudioGen). Use when you need to generate music from text descriptions, create sound effects, or perform melody-conditioned music generation.

blip-2-vision-language

Orchestra-Research/AI-Research-SKILLs

Vision-language pre-training framework bridging frozen image encoders and LLMs. Use when you need image captioning, visual question answering, image-text retrieval, or multimodal chat with state-of-the-art zero-shot performance.

instructor

Orchestra-Research/AI-Research-SKILLs

Extract structured data from LLM responses with Pydantic validation, retry failed extractions automatically, parse complex JSON with type safety, and stream partial results with Instructor - battle-tested structured output library

llama-factory

Orchestra-Research/AI-Research-SKILLs

Expert guidance for fine-tuning LLMs with LLaMA-Factory - WebUI no-code, 100+ models, 2/3/4/5/6/8-bit QLoRA, multimodal support
