
mcp-funasr

MCPServer is a Python-based server that leverages Alibaba's FunASR library to provide speech processing services through the FastMCP framework.

Installation
Run the following command in your terminal to add the MCP server to Claude Code:

claude mcp add --transport stdio radial-hks-mcp-server-funasr uvx radial-hks-mcp-server-funasr

How to use

This MCP server exposes FunASR-powered speech processing capabilities via the FastMCP framework. It supports audio validation, asynchronous speech transcription, and voice activity detection (VAD), with multi-model support and dynamic loading of ASR and VAD models. You can validate audio files to ensure they are readable and correctly formatted, start asynchronous transcription tasks with optional per-task model selection and generation parameters, and query for task status or retrieve results. The server is designed to load models on demand and can switch between different ASR and VAD models as needed for different requests.
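As a rough illustration of the audio-validation step described above, the sketch below checks that a WAV file is readable and reports its basic format using only Python's standard-library wave module. The function name and return shape are assumptions for illustration; the server's actual validation tool may differ.

```python
import wave


def validate_audio(path: str) -> dict:
    """Check that a WAV file is readable and report its basic format.

    A stand-in for the server's audio-validation tool; the real tool
    name and return shape may differ.
    """
    try:
        with wave.open(path, "rb") as wf:
            return {
                "valid": True,
                "channels": wf.getnchannels(),
                "sample_rate": wf.getframerate(),
                "duration_s": wf.getnframes() / wf.getframerate(),
            }
    except (wave.Error, OSError, EOFError) as exc:
        # Unreadable or malformed file: report failure instead of raising.
        return {"valid": False, "error": str(exc)}
```

A file that cannot be opened or parsed yields {"valid": False, ...}, which maps naturally onto a tool result a client can branch on before starting a transcription task.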

How to install

Prerequisites:

  • Python 3.8+
  • pip
  1. Clone the repository or navigate to the MCPServer directory that contains this README and the server code.
  2. Create a virtual environment and activate it:
    python -m venv .venv
    source .venv/bin/activate  # On Windows: .venv\Scripts\activate
    
  3. Install dependencies:
    pip install -r requirements.txt
    
    This installs FastMCP, FunASR, and their dependencies. If you need a specific PyTorch setup (e.g., CUDA-enabled), install PyTorch manually prior to running the server as per the official PyTorch instructions.
  4. Run the server:
    uvicorn main:app --host 0.0.0.0 --port 9000
    
    The server starts on port 9000, bound to all interfaces; reach it locally at http://localhost:9000. On first run, FunASR will download the default ASR and VAD models, which may take some time.
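The start/poll/fetch flow for asynchronous transcription mentioned in "How to use" can be sketched as below. Everything here is illustrative: transcribe() is a placeholder for the actual FunASR call, and the function names and statuses are assumptions, not the server's exact API.

```python
import uuid
from concurrent.futures import ThreadPoolExecutor

# Illustrative in-process task registry; a real server would persist
# task state and run the actual FunASR model instead of a placeholder.
_executor = ThreadPoolExecutor(max_workers=2)
_tasks = {}  # task_id -> Future


def transcribe(audio_path: str, model_name: str = "default") -> str:
    # Placeholder for the real ASR call.
    return f"transcript of {audio_path} using {model_name}"


def start_transcription(audio_path: str, model_name: str = "default") -> str:
    """Kick off a background task and return its id immediately."""
    task_id = str(uuid.uuid4())
    _tasks[task_id] = _executor.submit(transcribe, audio_path, model_name)
    return task_id


def get_status(task_id: str) -> str:
    return "done" if _tasks[task_id].done() else "running"


def get_result(task_id: str) -> str:
    return _tasks[task_id].result()  # blocks until the task finishes
```

A client would call start_transcription, poll get_status until it reports "done", then fetch the text with get_result.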

Additional notes


  • Environment variable: MODELSCOPE_API_TOKEN is optional but may be required if you access private models or face rate limits. Set it in your environment if needed, e.g., export MODELSCOPE_API_TOKEN="YOUR_TOKEN_HERE".
  • The server supports per-request model selection for transcription (model_name) and per-request generation parameters (model_generate_kwargs). Models can be loaded or switched dynamically.
  • If you encounter large model downloads, ensure the network allows access to ModelScope and related model repositories.
  • The default ASR/VAD models are downloaded on the first run. You can explicitly configure which models to load by adjusting server configuration or environment variables if supported by your deployment.
  • For GPU-enabled environments, ensure PyTorch with CUDA is installed prior to running the server for optimal performance.
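The per-request options noted above might appear in a request payload like the following sketch. Only model_name and model_generate_kwargs come from the notes; the other field and every value are illustrative assumptions, not the server's documented schema.

```python
# Hypothetical transcription request payload. "model_name" selects the
# ASR model for this task; "model_generate_kwargs" would be forwarded
# to the model's generation call. Field names other than these two,
# and all values, are assumptions for illustration.
request = {
    "audio_path": "recordings/meeting.wav",
    "model_name": "paraformer-zh",
    "model_generate_kwargs": {
        "batch_size_s": 300,
        "hotword": "FunASR",
    },
}
```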
