
MCP-YouTube-Transcribe

An MCP server that fetches YouTube video transcripts, generating them locally with Whisper when no official transcript exists.

Installation
Run this command in your terminal to add the MCP server to Claude Code:

claude mcp add --transport stdio jackhp-mcp-youtube-transcribe python mcp_server.py \
  --env LOG_FILE="mcp_server.log"

The LOG_FILE variable is optional; it defaults to mcp_server.log in the project root.
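Once added, you can confirm the registration with claude mcp list.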

How to use

MCP-YouTube-Transcribe is an MCP server that provides a single tool, get_youtube_transcript, for fetching YouTube video transcripts. For each request it first tries to retrieve an official YouTube transcript (manual or auto-generated), which is quick and accurate when one exists. If no official transcript is available, the server downloads the video's audio and transcribes it locally with Whisper, preferring whisper.cpp for speed when installed and falling back to OpenAI's Python Whisper model otherwise. The tool is exposed over the standard MCP JSON-RPC interface, so you can connect the server to Claude Code, Gemini CLI, or any other MCP client and invoke transcription from your terminal or automation scripts.

To use it, start the MCP server with python mcp_server.py (as described in the installation steps below). Once it is running, call the get_youtube_transcript tool by sending a JSON-RPC request containing either a YouTube URL or a free-text search query. When you provide a text query instead of a direct URL, the server searches YouTube for the most relevant video. If whisper.cpp is installed and a suitable model file is present in the models/ folder, local transcription is fast; otherwise the server falls back to the Python Whisper model.
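For example, the sketch below spawns the server over stdio and calls the tool. The handshake methods (initialize, tools/call) are standard MCP JSON-RPC; the "query" argument name and the demo client details are assumptions here, so check the server's tools/list response for the exact input schema.

import json
import subprocess

# Start the server as a child process speaking newline-delimited JSON over stdio.
proc = subprocess.Popen(["python", "mcp_server.py"],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

def send(message):
    proc.stdin.write(json.dumps(message) + "\n")
    proc.stdin.flush()

# Standard MCP handshake: initialize, then the initialized notification.
send({"jsonrpc": "2.0", "id": 1, "method": "initialize",
      "params": {"protocolVersion": "2024-11-05", "capabilities": {},
                 "clientInfo": {"name": "demo-client", "version": "0.1"}}})
print(proc.stdout.readline())  # initialize response

send({"jsonrpc": "2.0", "method": "notifications/initialized"})

# Call the transcript tool ("query" is an assumed argument name).
send({"jsonrpc": "2.0", "id": 2, "method": "tools/call",
      "params": {"name": "get_youtube_transcript",
                 "arguments": {"query": "python tutorial"}}})
print(proc.stdout.readline())  # transcript result

proc.terminate()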

In short: you provide a query or URL; the server finds the best matching video, tries to fetch an official transcript first, and if needed transcribes the audio locally with Whisper to produce a complete transcript.

How to install

Prerequisites:

  • Python 3.12+
  • uv (Python package installer and resolver)
  • FFmpeg (required for audio processing)
  • whisper.cpp (highly recommended for fast local transcription)
  • Optional: Whisper model files (e.g., tiny model) placed in a models/ directory
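Before you start, you can sanity-check your environment with a short script like this one (a convenience sketch, not part of the repository); it verifies the Python version and looks for ffmpeg and whisper-cli on your PATH:

import shutil
import sys

# Python 3.12+ is required by the project.
assert sys.version_info >= (3, 12), "Python 3.12+ is required"

# ffmpeg is required for audio processing; whisper-cli is optional but
# enables the faster whisper.cpp transcription path.
for tool, note in [("ffmpeg", "required for audio processing"),
                   ("whisper-cli", "optional, enables whisper.cpp")]:
    path = shutil.which(tool)
    print(f"{tool} ({note}): {path or 'NOT FOUND'}")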

Step-by-step installation:

  1. Install Python 3.12+ from https://www.python.org/downloads/
  2. Install uv (globally or in a virtual environment):
python -m pip install uv
  3. Install FFmpeg and verify it's in your PATH:
  • macOS: brew install ffmpeg
  • Linux: follow your distro's package manager instructions
  • Windows: ensure ffmpeg.exe is accessible from the command line
  4. Install whisper.cpp (recommended) and ensure whisper-cli is in your PATH:
  • macOS/Linux: follow the whisper.cpp installation guide
  • Windows: install it and ensure whisper-cli is accessible
  5. Clone the repository and set up dependencies:
git clone https://github.com/<your-username>/YouTubeTranscriber.git
cd YouTubeTranscriber
uv venv
uv sync
  6. Verify the MCP server script is available. You can start the server with:
python mcp_server.py

The server will log activity to mcp_server.log in the project root.
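To confirm a clean start, you can watch that log while the server runs, for example with tail -f mcp_server.log.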

Additional notes

Tips and common issues:

  • Ensure whisper.cpp is installed and the whisper-cli command is in your PATH for best performance.
  • FFmpeg must be installed and accessible from your system PATH; without it, audio extraction and processing will fail.
  • If your setup's Python Whisper fallback requires an OpenAI API key, configure it via environment variables; see the launch sketch after this list.
  • When running behind proxies or restricted networks, ensure that the YouTube transcript fetching and video search components can reach YouTube endpoints.
  • Place Whisper model files in a models/ folder at the repository root (e.g., models/ggml-tiny.bin) so the server can find them.
  • For Gemini CLI integration on Windows, use the provided run_server.bat wrapper and ensure the path to your project is correctly configured in config.json.
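As referenced above, here is a minimal sketch of launching the server with explicit environment overrides. LOG_FILE matches the install command earlier; OPENAI_API_KEY is a placeholder and only matters if your Whisper fallback actually calls OpenAI's hosted API:

import os
import subprocess

env = dict(os.environ)
env["LOG_FILE"] = "mcp_server.log"              # log location, per the install command
env.setdefault("OPENAI_API_KEY", "<your-key>")  # placeholder; set only if required

subprocess.run(["python", "mcp_server.py"], env=env)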
