Whisper
This repository provides all model files needed to run `whisper.cpp-mblt`, the Mobilint NPU-accelerated fork of `whisper.cpp`.
| Model | File | Size | Description |
|---|---|---|---|
| whisper-small | ggml-small.bin | 466 MB | GGML model (tokenizer + weights for CPU fallback) |
| whisper-small | ggml-small-encoder.mxq | 93 MB | Mobilint NPU encoder |
| whisper-small | ggml-small-decoder.mxq | 159 MB | Mobilint NPU decoder |
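If you prefer to fetch the files individually rather than relying on the CLI's built-in auto-download, the `huggingface-cli` tool from the `huggingface_hub` package can pull them by name (a sketch; file names follow the table above, and `huggingface_hub` must already be installed):

```shell
# Sketch: download the three model files from the mobilint/whisper-small repo
# into the current directory (assumes `pip install huggingface_hub`)
huggingface-cli download mobilint/whisper-small \
  ggml-small.bin \
  ggml-small-encoder.mxq \
  ggml-small-decoder.mxq \
  --local-dir .
```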
```sh
# Download all files and run
whisper-cli-mblt \
  -m ggml-small.bin \
  --mxq-encoder ggml-small-encoder.mxq \
  --mxq-decoder ggml-small-decoder.mxq \
  -f audio.wav

# Or auto-download from Hugging Face
whisper-cli-mblt -hf mobilint/whisper-small -f audio.wav
```
The `ggml-small.bin` file is also compatible with standard `whisper.cpp` for CPU-only inference:

```sh
whisper-cli -m ggml-small.bin -f audio.wav
```
License: Apache 2.0 (same as the original OpenAI Whisper model)

Base model: openai/whisper-small