How to use kaiinui/kotoba-whisper-v2.0-mlx with MLX:

```shell
# Download the model from the Hub
pip install "huggingface_hub[hf_xet]"
huggingface-cli download --local-dir kotoba-whisper-v2.0-mlx kaiinui/kotoba-whisper-v2.0-mlx
```
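The same download can be done from Python via huggingface_hub. A minimal sketch: it fetches a single file (`config.json`) to confirm access to the repo, and the commented `snapshot_download` call mirrors the CLI command for grabbing the whole repository.

```python
from huggingface_hub import hf_hub_download, snapshot_download

# Fetch one small file from the repo (cached locally by huggingface_hub).
config_path = hf_hub_download(
    repo_id="kaiinui/kotoba-whisper-v2.0-mlx",
    filename="config.json",
)
print(config_path)

# To mirror the CLI and download the whole repo for local use:
# local_dir = snapshot_download(
#     repo_id="kaiinui/kotoba-whisper-v2.0-mlx",
#     local_dir="kotoba-whisper-v2.0-mlx",
# )
```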
kotoba-whisper-v2.0-mlx
This repository contains kotoba-whisper-v2.0 converted to the mlx-whisper format, suitable for running on Apple Silicon.
Because kotoba-whisper-v2.0 is derived from distil-large-v3, this model is significantly faster than mlx-community/whisper-large-v3-mlx with little loss of accuracy on Japanese transcription.
Usage

```shell
pip install mlx-whisper
```

```python
import mlx_whisper

result = mlx_whisper.transcribe(speech_file, path_or_hf_repo="kaiinui/kotoba-whisper-v2.0-mlx")
```
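Building on the one-liner above, a sketch that prints the full transcript plus per-segment timestamps from the result dictionary. `format_segments` is a hypothetical helper and `"speech.wav"` is a placeholder path; the actual transcription requires Apple Silicon.

```python
def format_segments(segments):
    """Render mlx-whisper result segments as '[start -> end] text' lines."""
    return [
        f"[{seg['start']:6.2f} -> {seg['end']:6.2f}] {seg['text'].strip()}"
        for seg in segments
    ]

if __name__ == "__main__":
    import mlx_whisper  # requires Apple Silicon (pip install mlx-whisper)

    result = mlx_whisper.transcribe(
        "speech.wav",  # placeholder: path to your audio file
        path_or_hf_repo="kaiinui/kotoba-whisper-v2.0-mlx",
    )
    print(result["text"])
    print("\n".join(format_segments(result["segments"])))
```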
Related Links
- kotoba-whisper-v2.0 (The original model)
- mlx-whisper
Model tree for kaiinui/kotoba-whisper-v2.0-mlx
- Base model: kotoba-tech/kotoba-whisper-v2.0