# Training Evaluation: whisper-small-pilotgpt-unified-all-data-lowercase-data-prep-6772

Evaluation results comparing the base model against the fine-tuned model.
## Summary
| Model | WER |
|---|---|
| openai/whisper-small (base) | 53.69% |
| Trelis/whisper-small-pilotgpt-unified-all-data-lowercase-data-prep-6772 (fine-tuned) | 27.54% |
Improvement: 26.15 percentage points absolute WER reduction (53.69% → 27.54%; lower is better).
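WER (word error rate) is the word-level edit distance between the model's transcription and the reference, divided by the number of reference words. A minimal sketch of the standard computation, using stdlib only (the transcripts below are hypothetical examples, not taken from the evaluation dataset):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for edit distance over word tokens.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution or match
            )
    return d[len(ref)][len(hyp)] / len(ref)

# Hypothetical sample: one substituted word in a four-word reference -> 25% WER.
print(wer("cleared for takeoff runway", "cleared for takeoff runaway"))  # 0.25
```

In practice a library such as `jiwer` (which also handles normalization like lowercasing, relevant given this model's lowercase data prep) would be used instead of hand-rolling the distance.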
## Source Data
- Evaluation Dataset: Trelis/pilotgpt-test-0.5s
- Base Model: openai/whisper-small
- Fine-tuned Model: Trelis/whisper-small-pilotgpt-unified-all-data-lowercase-data-prep-6772
## Columns

| Column | Description |
|---|---|
| `audio` | Audio sample (if available from the source dataset) |
| `reference` | Ground truth transcription |
| `base_prediction` | Base model prediction |
| `base_wer` | Base model WER for this sample |
| `finetuned_prediction` | Fine-tuned model prediction |
| `finetuned_wer` | Fine-tuned model WER for this sample |
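When aggregating the per-sample `base_wer` / `finetuned_wer` columns into a corpus-level figure, errors are typically pooled and weighted by reference length rather than averaged, since a plain mean over-weights short clips. A sketch under that assumption (the rows and the `ref_words` field below are hypothetical; the card does not state how its summary WER was aggregated):

```python
def corpus_wer(rows):
    """Corpus-level WER: total word errors over total reference words,
    rather than the mean of per-sample WERs."""
    total_errors = sum(r["wer"] * r["ref_words"] for r in rows)
    total_words = sum(r["ref_words"] for r in rows)
    return total_errors / total_words

# Hypothetical per-sample rows (wer as a fraction, ref_words = reference length).
rows = [
    {"wer": 0.50, "ref_words": 2},   # short clip, half the words wrong
    {"wer": 0.10, "ref_words": 10},  # longer clip, mostly correct
]
print(round(corpus_wer(rows), 3))  # 0.167 -- vs a naive per-sample mean of 0.30
```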
Generated by Trelis Studio