# CTranslate2 Conversion of bond005/whisper-podlodka-turbo

This repository contains a CTranslate2-converted version of the bond005/whisper-podlodka-turbo model, optimized for use with the faster-whisper library.

This is a full conversion, including all necessary configuration files (tokenizer.json, preprocessor_config.json, etc.) for maximum compatibility.

## Model Details

- **Original Model:** bond005/whisper-podlodka-turbo (a fine-tuned version of Whisper-Large-V3-Turbo)
- **Format:** CTranslate2
- **Quantization:** float16
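For reference, a conversion with these settings can typically be reproduced with CTranslate2's converter CLI. This is a sketch, not the exact command used for this repository; the output directory name is illustrative:

```shell
# Install the converter and the Transformers backend it reads from.
pip install ctranslate2 "transformers[torch]"

# Convert the original Hugging Face checkpoint to CTranslate2 format
# with float16 weights, copying the tokenizer and preprocessor configs
# so the output works with faster-whisper out of the box.
ct2-transformers-converter \
  --model bond005/whisper-podlodka-turbo \
  --output_dir whisper-podlodka-turbo-ct2-float16 \
  --quantization float16 \
  --copy_files tokenizer.json preprocessor_config.json
```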

This conversion was performed to enable high-performance inference on both CPU and GPU using faster-whisper.
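A minimal usage sketch with faster-whisper is shown below. The repo id is the one from this model card; the model is downloaded from the Hugging Face Hub on first use, and the audio path is a placeholder:

```python
def transcribe(audio_path: str) -> str:
    """Transcribe an audio file using this CTranslate2 conversion.

    Sketch only: assumes faster-whisper is installed and the model
    can be fetched from the Hugging Face Hub.
    """
    # Imported inside the function so the sketch can be read without
    # faster-whisper installed.
    from faster_whisper import WhisperModel

    # compute_type="float16" matches this conversion's quantization;
    # for CPU-only inference, use device="cpu" with compute_type="int8".
    model = WhisperModel(
        "ytrbqrkflbvbhy/whisper-podlodka-turbo-ct2-float16",
        device="cuda",
        compute_type="float16",
    )
    segments, _info = model.transcribe(audio_path, beam_size=5)
    return " ".join(segment.text.strip() for segment in segments)
```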

## Disclaimer

This is only a conversion of the original model. All credit for training and architecture goes to the original author, bond005.
