Whisper tflite models for use in Whisper app on F-Droid

The "transcribe-translate" models provide two signatures, "serving_transcribe" and "serving_translate", which force the model to perform the corresponding action:

@tf.function(
    input_signature=[
        tf.TensorSpec((1, 80, 3000), tf.float32, name="input_features"),
    ],
)
def transcribe(self, input_features):
    outputs = self.model.generate(
        input_features,
        max_new_tokens=450,  # change as needed
        return_dict_in_generate=True,
        forced_decoder_ids=[[2, 50359], [3, 50363]],  # forced to transcribe any language with no timestamps
    )
    return {"sequences": outputs["sequences"]}

@tf.function(
    input_signature=[
        tf.TensorSpec((1, 80, 3000), tf.float32, name="input_features"),
    ],
)
def translate(self, input_features):
    outputs = self.model.generate(
        input_features,
        max_new_tokens=450,  # change as needed
        return_dict_in_generate=True,
        forced_decoder_ids=[[2, 50358], [3, 50363]],  # forced to translate any language with no timestamps
    )
    return {"sequences": outputs["sequences"]}
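
Named signatures like these survive TFLite conversion when the module is exported as a SavedModel with an explicit signatures map. The following is a minimal, self-contained sketch using a toy tf.Module in place of the real Whisper wrapper (the signature and output names match the ones above, but the model bodies are dummies):

```python
import tempfile
import numpy as np
import tensorflow as tf

class ToyModel(tf.Module):
    """Stand-in for the Whisper wrapper: two functions, one input signature each."""

    @tf.function(input_signature=[tf.TensorSpec((1,), tf.float32, name="input_features")])
    def transcribe(self, input_features):
        return {"sequences": input_features + 1.0}

    @tf.function(input_signature=[tf.TensorSpec((1,), tf.float32, name="input_features")])
    def translate(self, input_features):
        return {"sequences": input_features * 2.0}

model = ToyModel()
with tempfile.TemporaryDirectory() as saved_dir:
    # Export with explicitly named signatures, then convert the SavedModel.
    tf.saved_model.save(
        model,
        saved_dir,
        signatures={
            "serving_transcribe": model.transcribe.get_concrete_function(),
            "serving_translate": model.translate.get_concrete_function(),
        },
    )
    tflite_model = tf.lite.TFLiteConverter.from_saved_model(saved_dir).convert()

# Both signatures are preserved and can be selected at inference time.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
runner = interpreter.get_signature_runner("serving_transcribe")
out = runner(input_features=np.array([1.0], dtype=np.float32))["sequences"]
```

Note that converting the real Whisper generate() additionally requires TF select ops (converter.target_spec.supported_ops including tf.lite.OpsSet.SELECT_TF_OPS), as in the nyadla-sys conversion scripts referenced below.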

In order to force transcription or translation of a certain language, additionally set the forced decoder id at position 1 to the language token, as shown below:

def transcribe(self, input_features):
    outputs = self.model.generate(
        input_features,
        max_new_tokens=450,  # change as needed
        return_dict_in_generate=True,
        forced_decoder_ids=[[1, 50261], [2, 50359], [3, 50363]],  # forced to transcribe (50359) German (50261) with no timestamps (50363)
    )
    return {"sequences": outputs["sequences"]}

def translate(self, input_features):
    outputs = self.model.generate(
        input_features,
        max_new_tokens=450,  # change as needed
        return_dict_in_generate=True,
        forced_decoder_ids=[[1, 50261], [2, 50358], [3, 50363]],  # forced to translate (50358) German (50261) with no timestamps (50363)
    )
    return {"sequences": outputs["sequences"]}

(language codes from here: https://github.com/woheller69/whisperIME/blob/master/app/src/main/java/com/whispertflite/utils/InputLang.java)
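
The forced_decoder_ids lists above follow a fixed pattern: an optional language token at position 1, the task token at position 2 (50359 = transcribe, 50358 = translate), and the no-timestamps token 50363 at position 3. A small helper (hypothetical, not part of the app) can build these lists; the language dict below is illustrative, not exhaustive, with ids from the multilingual Whisper vocabulary (German 50261 matches the example above):

```python
# Token ids from the multilingual Whisper vocabulary.
TASK_TOKENS = {"transcribe": 50359, "translate": 50358}
NO_TIMESTAMPS = 50363
LANG_TOKENS = {"en": 50259, "zh": 50260, "de": 50261}  # illustrative subset

def build_forced_decoder_ids(task, lang=None):
    """Return a forced_decoder_ids list for model.generate().

    Without a language, only positions 2 and 3 are forced, so the model
    auto-detects the language at position 1; with a language, position 1
    pins the language token as well.
    """
    ids = []
    if lang is not None:
        ids.append([1, LANG_TOKENS[lang]])
    ids.append([2, TASK_TOKENS[task]])
    ids.append([3, NO_TIMESTAMPS])
    return ids
```

For example, build_forced_decoder_ids("transcribe", "de") reproduces the German transcription list shown above.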


The models are based on:

@misc{radford2022whisper,
  doi = {10.48550/ARXIV.2212.04356},
  url = {https://arxiv.org/abs/2212.04356},
  author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
  title = {Robust Speech Recognition via Large-Scale Weak Supervision},
  publisher = {arXiv},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}

Conversion to tflite is based on:

@misc{nyadla-sys,
  author = {Niranjan Yadla},
  title = {{Whisper TFLite: OpenAI Whisper Model Port for Edge Devices}},
  year = {2022},
  howpublished = {GitHub Repository},
  url = {https://github.com/nyadla-sys/whisper.tflite},
  note = {Original TFLite implementation of OpenAI Whisper for on-device automatic speech recognition}
}