# Sherpa-ONNX Whisper Tiny – English
OpenAI's Whisper Tiny model converted to ONNX format for sherpa-onnx, packaged for use with the RunAnywhere SDK.

**Format:** tar.gz archive (~146 MB) containing the encoder and decoder ONNX models plus a tokens file.
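For server-side or scripting use, the archive can also be fetched and unpacked by hand. A minimal Python sketch, assuming only the download URL from this card; the `./models` destination is illustrative:

```python
import tarfile
import urllib.request
from pathlib import Path

# Download URL from this model card (~146 MB)
ARCHIVE_URL = (
    "https://huggingface.co/runanywhere/sherpa-onnx-whisper-tiny.en"
    "/resolve/main/sherpa-onnx-whisper-tiny.en.tar.gz"
)

def fetch_and_extract(url: str, dest: Path) -> list[str]:
    """Download the tar.gz (skipped if already present) and extract it
    under dest; the archive unpacks into a nested directory holding the
    encoder, decoder, and tokens files. Returns the member names."""
    dest.mkdir(parents=True, exist_ok=True)
    archive = dest / url.rsplit("/", 1)[-1]
    if not archive.exists():
        urllib.request.urlretrieve(url, archive)
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(dest)
        return tar.getnames()

# Usage: fetch_and_extract(ARCHIVE_URL, Path("./models"))
```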
## Usage with RunAnywhere SDK
### Swift (iOS / macOS)

```swift
import RunAnywhere

RunAnywhere.registerModel(
    id: "sherpa-onnx-whisper-tiny.en",
    name: "Whisper Tiny English (ONNX)",
    url: URL(string: "https://huggingface.co/runanywhere/sherpa-onnx-whisper-tiny.en/resolve/main/sherpa-onnx-whisper-tiny.en.tar.gz")!,
    framework: .onnx,
    modality: .speechRecognition,
    artifactType: .archive(.tarGz, structure: .nestedDirectory),
    memoryRequirement: 75_000_000
)

// Transcribe audio
let result = try await RunAnywhere.transcribe(audioData, modelId: "sherpa-onnx-whisper-tiny.en")
print(result.text)
```
### Kotlin (Android / JVM)

```kotlin
import com.runanywhere.sdk.RunAnywhere
import com.runanywhere.sdk.models.*

RunAnywhere.registerModel(
    id = "sherpa-onnx-whisper-tiny.en",
    name = "Whisper Tiny English (ONNX)",
    url = "https://huggingface.co/runanywhere/sherpa-onnx-whisper-tiny.en/resolve/main/sherpa-onnx-whisper-tiny.en.tar.gz",
    framework = InferenceFramework.ONNX,
    modality = ModelCategory.SPEECH_RECOGNITION,
    memoryRequirement = 75_000_000L
)

val result = RunAnywhere.transcribe(audioData, modelId = "sherpa-onnx-whisper-tiny.en")
println(result.text)
```
### Web (TypeScript)

```typescript
import { RunAnywhere, LLMFramework, ModelCategory } from '@anthropic/runanywhere-web';

RunAnywhere.registerModels([{
  id: 'sherpa-onnx-whisper-tiny.en',
  name: 'Whisper Tiny English (ONNX)',
  url: 'https://huggingface.co/runanywhere/sherpa-onnx-whisper-tiny.en/resolve/main/sherpa-onnx-whisper-tiny.en.tar.gz',
  framework: LLMFramework.ONNX,
  modality: ModelCategory.SpeechRecognition,
  memoryRequirement: 75_000_000,
  artifactType: 'archive',
}]);

await RunAnywhere.downloadModel('sherpa-onnx-whisper-tiny.en');
await RunAnywhere.loadModel('sherpa-onnx-whisper-tiny.en');
const result = await RunAnywhere.transcribe(audioData, 'sherpa-onnx-whisper-tiny.en');
console.log(result.text);
```
### React Native (TypeScript)

```typescript
import { RunAnywhere } from 'runanywhere-react-native';

RunAnywhere.registerModel({
  id: 'sherpa-onnx-whisper-tiny.en',
  name: 'Whisper Tiny English (ONNX)',
  url: 'https://huggingface.co/runanywhere/sherpa-onnx-whisper-tiny.en/resolve/main/sherpa-onnx-whisper-tiny.en.tar.gz',
  framework: 'onnx',
  modality: 'speechRecognition',
  memoryRequirement: 75_000_000,
});

const result = await RunAnywhere.transcribe(audioData, 'sherpa-onnx-whisper-tiny.en');
```
### Flutter (Dart)

```dart
import 'package:runanywhere_flutter/runanywhere_flutter.dart';

RunAnywhere.registerModel(
  id: 'sherpa-onnx-whisper-tiny.en',
  name: 'Whisper Tiny English (ONNX)',
  url: 'https://huggingface.co/runanywhere/sherpa-onnx-whisper-tiny.en/resolve/main/sherpa-onnx-whisper-tiny.en.tar.gz',
  framework: InferenceFramework.onnx,
  modality: ModelCategory.speechRecognition,
  memoryRequirement: 75000000,
);

final result = await RunAnywhere.transcribe(audioData, 'sherpa-onnx-whisper-tiny.en');
print(result.text);
```
## Model Details
| Property | Value |
|---|---|
| Base Model | OpenAI Whisper Tiny |
| Language | English |
| Parameters | 39M |
| Format | ONNX (encoder + decoder) |
| Runtime | sherpa-onnx |
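Since the table above names sherpa-onnx as the runtime, the model can also be driven from sherpa-onnx's own Python bindings, outside the RunAnywhere SDK. A hedged sketch: the in-archive file names (`tiny.en-encoder.onnx`, `tiny.en-decoder.onnx`, `tiny.en-tokens.txt`) are assumptions from the usual sherpa-onnx Whisper layout, so check them against the extracted contents, and verify the API against the sherpa-onnx version you install.

```python
import wave

def transcribe_file(model_dir: str, wav_path: str) -> str:
    """Decode a 16 kHz mono 16-bit wav file with sherpa-onnx directly.
    File names under model_dir are assumed, not guaranteed by this card."""
    # Third-party deps imported lazily: pip install sherpa-onnx numpy
    import numpy as np
    import sherpa_onnx

    recognizer = sherpa_onnx.OfflineRecognizer.from_whisper(
        encoder=f"{model_dir}/tiny.en-encoder.onnx",
        decoder=f"{model_dir}/tiny.en-decoder.onnx",
        tokens=f"{model_dir}/tiny.en-tokens.txt",
    )
    with wave.open(wav_path) as f:
        pcm = np.frombuffer(f.readframes(f.getnframes()), dtype=np.int16)
        samples = pcm.astype(np.float32) / 32768.0  # scale to [-1, 1)
        stream = recognizer.create_stream()
        stream.accept_waveform(f.getframerate(), samples)
    recognizer.decode_stream(stream)
    return stream.result.text

# Usage: print(transcribe_file("./models/sherpa-onnx-whisper-tiny.en", "test.wav"))
```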
## Attribution
Based on OpenAI's Whisper. ONNX conversion from the csukuangfj/sherpa-onnx project.