Dataset preview. All ten rows share model = unsloth/gemma-4-E2B-it, family = Gemma4, size = E2B.

| language | rtf | n | n_pairs | wer_pct | cer_pct | sfr_mean | sfr_full_pct | sfr_zero_pct | sfr_empty_n |
|---|---|---|---|---|---|---|---|---|---|
| hindi | 0.1599 | 418 | 418 | 16.55 | 9.39 | 94.07 | 74.2 | 4.5 | 0 |
| bengali | 0.1315 | 920 | 920 | 32.13 | 16.03 | 89.27 | 56.5 | 5.7 | 0 |
| malayalam | 0.163 | 958 | 958 | 62.58 | 39.07 | 63.52 | 33.6 | 15 | 0 |
| somali | 0.2188 | 1,019 | 1,019 | 60.78 | 22.95 | 99.56 | 88.3 | 0 | 0 |
| georgian | 0.2288 | 979 | 979 | 89.15 | 76.25 | 19.51 | 6 | 69.3 | 0 |
| pashto | 0.1961 | 512 | 512 | 78.1 | 45.84 | 76.56 | 60 | 15.8 | 0 |
| urdu | 0.2106 | 299 | 299 | 96.38 | 82.34 | 6.46 | 3 | 61.5 | 0 |
| arabic | 0.1762 | 428 | 428 | 11.95 | 5.74 | 97.17 | 81.5 | 1.9 | 0 |
| persian | 0.1327 | 871 | 871 | 22.11 | 9.98 | 95.4 | 76.6 | 3.3 | 0 |
| tamil | 0.1642 | 591 | 591 | 62.56 | 36.74 | 70.08 | 30.3 | 14.7 | 0 |

Dominant-script utterance counts (dom_* columns; blank = null):

| language | dom_devanagari | dom_latin | dom_bengali | dom_other | dom_tamil | dom_malayalam | dom_georgian | dom_arabic_dari_urdu | dom_pashto |
|---|---|---|---|---|---|---|---|---|---|
| hindi | 399 | 19 | | | | | | | |
| bengali | 36 | 37 | 841 | 5 | 1 | | | | |
| malayalam | 66 | 83 | 10 | 82 | 124 | 593 | | | |
| somali | | 1,019 | | | | | | | |
| georgian | 4 | 490 | 5 | 268 | 8 | 4 | 195 | 3 | 2 |
| pashto | 60 | 24 | 7 | 19 | 1 | | | 55 | 346 |
| urdu | 271 | 15 | | | | | | 12 | 1 |
| arabic | | 9 | | | | | | 419 | |
| persian | | 32 | | | | | | 836 | 3 |
| tamil | 63 | 30 | 3 | 26 | 420 | 49 | | | |

Script fidelity benchmark

Anonymous supplement for the paper "Script collapse in multilingual ASR: A reference-free metric and 100-pair benchmark."

Script Fidelity Rate (SFR) measures the fraction of ASR hypothesis characters that belong to the expected target script. WER measures word edits, while SFR checks whether the output is written in the target orthography.
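As a rough illustration of the metric (a sketch, not the bundled scripts/script_fidelity.py implementation), SFR can be approximated with the standard library by checking each alphabetic character's Unicode name against the expected script:

```python
import unicodedata

def sfr_sketch(text: str, script: str) -> float:
    """Fraction of alphabetic characters whose Unicode name mentions `script`.

    Rough sketch only; the benchmark's implementation handles combining
    marks, digits, and merged script buckets more carefully.
    """
    letters = [ch for ch in text if ch.isalpha()]
    if not letters:
        return 0.0
    in_script = sum(1 for ch in letters if script.upper() in unicodedata.name(ch, ""))
    return in_script / len(letters)

print(sfr_sketch("नमस्ते", "Devanagari"))   # Devanagari hypothesis -> 1.0
print(sfr_sketch("namaste", "Devanagari"))  # romanized hypothesis -> 0.0
```

A romanized hypothesis can score a low WER-adjacent edit distance in Latin characters yet an SFR of zero, which is exactly the failure mode the benchmark targets.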

Scope

The paper benchmark contains 100 evaluated model-language pairs on FLEURS test splits:

| Language | Script | FLEURS code | Role |
|---|---|---|---|
| Pashto | Perso-Arabic | ps_af | collapse target |
| Urdu | Perso-Arabic | ur_pk | control |
| Arabic | Perso-Arabic | ar_eg | control |
| Persian | Perso-Arabic | fa_ir | control |
| Hindi | Devanagari | hi_in | collapse target |
| Bengali | Bengali | bn_in | collapse target |
| Malayalam | Malayalam | ml_in | collapse target |
| Tamil | Tamil | ta_in | extension |
| Somali | Latin | so_so | collapse target |
| Georgian | Georgian | ka_ge | collapse target |

Evaluated models:

| Family | Model identifiers |
|---|---|
| Whisper | openai/whisper-tiny, openai/whisper-base, openai/whisper-small, openai/whisper-medium, openai/whisper-large-v2, openai/whisper-large-v3, openai/whisper-large-v3-turbo |
| MMS-1B | facebook/mms-1b-all |
| SeamlessM4T-v2 | facebook/seamless-m4t-v2-large |
| Gemma 4 | unsloth/gemma-4-E2B-it |

All ten languages, including Pashto, are evaluated directly on FLEURS.

Repository layout

scripts/
  eval_multilang.py      Main evaluation driver
  script_fidelity.py     SFR metric implementation
  merge_gemma4.py        Merges Gemma 4 results and regenerates figures
  eval_downstream_mt.py  Downstream MT validation for Gemma 4 outputs
  eval_sfr_lid_hybrid.py SFR+LID audit for saved Gemma 4 outputs
  run_gemma4.sh          Convenience wrapper for the Gemma 4 baseline run
analysis/
  sf_results.csv         Main result table plus six supplemental Pashto rows
  gemma4_downstream_mt_summary.csv        MT validation summary
  gemma4_downstream_mt_correlations.csv   MT validation correlations
  sfr_lid_hybrid_summary.csv              SFR+LID audit summary
results_gemma4/
  sf_results.csv         Gemma 4 result table
results_gemma4_prompt_mitigation/
  sf_results.csv         Gemma 4 script-aware prompt result table
results_gemma4_downstream_mt/
  translations/          NLLB translations for gold, baseline, and script-hint text
figures/
  sfr_heatmap.pdf        SFR matrix figure
  wer_vs_sfr_scatter.pdf WER-vs-SFR figure
croissant_metadata.json  Artifact metadata with Responsible AI fields
LICENSE                 CC-BY-4.0 license notice
requirements.txt         Python package requirements

The general SFR library is published separately as script-fidelity on PyPI: https://pypi.org/project/script-fidelity/. The import name is script_fidelity. This artifact keeps a bundled scripts/script_fidelity.py for exact reproduction of the submitted benchmark.

The same metric is available as a Hugging Face Evaluate community metric: https://huggingface.co/spaces/themechanism/script_fidelity_rate.

Setup

Python 3.10 or newer is recommended. Use uv for all environment and package management.

uv venv
uv pip install torch --index-url https://download.pytorch.org/whl/cu121
uv pip install -r requirements.txt

datasets==2.21.0 is pinned because google/fleurs still uses a dataset script. Install CUDA-compatible torch from the cu121 index on common cloud GPU images. The scripts route Hugging Face, evaluate, and temporary files to a writable cache root. Set SFR_CACHE_ROOT=/path/to/cache to override the default.
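The exact routing logic lives in the scripts; a minimal sketch of the idea follows. Only SFR_CACHE_ROOT is the documented knob, and the subdirectory layout below is an assumption for illustration (HF_HOME and TMPDIR are standard variables the scripts plausibly set):

```python
import os

# Resolve a writable cache root; SFR_CACHE_ROOT is the documented override.
# The subdirectory layout below is an assumption for illustration.
cache_root = os.environ.get("SFR_CACHE_ROOT", os.path.join("/tmp", "sfr_cache"))
hf_home = os.path.join(cache_root, "huggingface")
tmp_dir = os.path.join(cache_root, "tmp")
for path in (hf_home, tmp_dir):
    os.makedirs(path, exist_ok=True)

# Must run before importing datasets/transformers/evaluate so the
# libraries pick the locations up.
os.environ.setdefault("HF_HOME", hf_home)
os.environ.setdefault("TMPDIR", tmp_dir)
```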

Running the evaluation

Full ASR inference is expensive and was already completed for the submitted results. The commands below reproduce the run configuration.

uv run python scripts/eval_multilang.py \
    --hf-token "$HF_TOKEN" \
    --results-dir ./analysis \
    --hub-repo "$ANONYMIZED_OUTPUT_REPO" \
    --languages pashto hindi bengali malayalam somali georgian urdu arabic persian tamil \
    --whisper-sizes tiny base small medium large-v2 large-v3 turbo \
    --run-mms --run-seamless

To refresh an existing model-language row in sf_results.csv, add --force.

Gemma 4 was run separately with instruction-following transcription:

uv run python scripts/eval_multilang.py \
    --results-dir ./results_gemma4 \
    --languages pashto urdu arabic persian hindi bengali malayalam tamil somali georgian \
    --run-gemma4 \
    --whisper-sizes

Gemma 4 uses the prompt:

Transcribe the following speech segment in its original language. Follow these specific instructions for formatting the answer:
* Only output the transcription, with no newlines.
* When transcribing numbers, write the digits, i.e. write 1.7 and not one point seven, and write 3 instead of three.

Whisper uses forced language tokens and greedy decoding (num_beams=1). Whisper, MMS-1B, and SeamlessM4T use float16 on CUDA. Gemma 4 uses bfloat16 on Apple MPS.

Gemma 4 script-aware prompting

The mitigation experiment compares the baseline Gemma 4 prompt above against a script-aware prompt on the same ten FLEURS test splits. The script-aware arm uses:

Transcribe the following speech segment in {language_name}. Use {script_name} script only. Do not translate, romanize, or add explanations.
Only output the transcription, with no newlines.
When transcribing numbers, write the digits, i.e. write 1.7 and not one point seven, and write 3 instead of three.
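The {language_name} and {script_name} placeholders are filled per language from the Scope table; a minimal sketch (the names SCRIPT_HINT_TEMPLATE and SCRIPTS are assumptions, the template text is the prompt above):

```python
# Sketch of filling the script-aware prompt; the mapping values come from
# the Scope table, the variable names here are assumptions.
SCRIPT_HINT_TEMPLATE = (
    "Transcribe the following speech segment in {language_name}. "
    "Use {script_name} script only. Do not translate, romanize, or add explanations.\n"
    "Only output the transcription, with no newlines.\n"
    "When transcribing numbers, write the digits, i.e. write 1.7 and not "
    "one point seven, and write 3 instead of three."
)

SCRIPTS = {"pashto": "Perso-Arabic", "hindi": "Devanagari", "georgian": "Georgian"}

prompt = SCRIPT_HINT_TEMPLATE.format(language_name="Pashto", script_name=SCRIPTS["pashto"])
```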

Run the experiment after results_gemma4/sf_results.csv contains all ten Gemma baseline rows:

uv run python scripts/eval_gemma4_prompt_mitigation.py

The script writes:

results_gemma4_prompt_mitigation/sf_results.csv
results_gemma4_prompt_mitigation/predictions/gemma4_script_hint_{language}_predictions.json
analysis/gemma4_prompt_mitigation_summary.csv

Downstream MT validation

The downstream check asks whether script errors damage a later text pipeline. It translates gold FLEURS transcripts, baseline Gemma 4 ASR outputs, and script-aware Gemma 4 ASR outputs into English, then scores chrF and BLEU against the aligned English FLEURS reference. The default MT model is facebook/nllb-200-distilled-600M, which supports FLORES-style language codes for the ten paper languages.
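chrF is a character n-gram F-score (β = 2 and n up to 6 in the standard definition). As a self-contained illustration of what it measures, a simplified stdlib sketch, not the scorer the script actually uses:

```python
from collections import Counter

def char_ngrams(text: str, n: int) -> Counter:
    s = text.replace(" ", "")  # chrF-style: drop spaces before char n-grams
    return Counter(s[i : i + n] for i in range(len(s) - n + 1))

def chrf_sketch(hyp: str, ref: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Average character n-gram F-beta over n = 1..max_n (simplified chrF)."""
    scores = []
    for n in range(1, max_n + 1):
        h, r = char_ngrams(hyp, n), char_ngrams(ref, n)
        if not h or not r:
            continue
        overlap = sum((h & r).values())
        precision = overlap / sum(h.values())
        recall = overlap / sum(r.values())
        if precision + recall == 0:
            scores.append(0.0)
            continue
        scores.append((1 + beta**2) * precision * recall
                      / (beta**2 * precision + recall))
    return 100 * sum(scores) / len(scores) if scores else 0.0
```

Because chrF works at the character level, it degrades smoothly when a translation input is partially romanized, which makes it a natural primary metric for this check.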

Run the deadline-friendly diagnostic subset:

uv run python scripts/eval_downstream_mt.py \
    --max-examples-per-language 100 \
    --sample-mode stratified_sfr

Run the full aligned set when time permits:

uv run python scripts/eval_downstream_mt.py \
    --max-examples-per-language 0 \
    --sample-mode random

The script writes:

results_gemma4_downstream_mt/translations/{mt_model}_{language}_{variant}_translations.json
analysis/gemma4_downstream_mt_summary.csv
analysis/gemma4_downstream_mt_utterances.csv
analysis/gemma4_downstream_mt_correlations.csv

Use NLLB as the primary MT model. Gemma can be used as a secondary sensitivity check, but it should not be the main downstream evaluator because Gemma also produces the ASR outputs in this experiment.

SFR+LID hybrid audit

The hybrid audit runs language identification over saved Gemma 4 outputs only. It does not rerun ASR.

uv run python scripts/eval_sfr_lid_hybrid.py

The script writes:

analysis/sfr_lid_hybrid_summary.csv
analysis/sfr_lid_hybrid_utterances.csv

Regenerating figures

uv run python scripts/merge_gemma4.py

This command merges Gemma 4 rows into analysis/sf_results.csv and regenerates the heatmap and scatter figures in figures/. The plotting code filters the main figures to the 100 model-language pairs reported in the paper.

Result fields

analysis/sf_results.csv has one row per evaluated model-language pair. The paper's 100-pair matrix is the subset with family in Whisper, MMS, SeamlessM4T, or Gemma4. The six rows with family=unknown are supplemental Pashto-only comparisons and are not used in the paper denominator, family summaries, heatmap, or collapse counts.
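Filtering to the paper subset is then a family membership check; a sketch over a hypothetical miniature of sf_results.csv:

```python
import csv
import io

# The four families reported in the paper (family=unknown rows are excluded).
PAPER_FAMILIES = {"Whisper", "MMS", "SeamlessM4T", "Gemma4"}

# Hypothetical miniature of analysis/sf_results.csv for illustration.
SAMPLE = """model,family,language
openai/whisper-tiny,Whisper,pashto
some/supplemental-baseline,unknown,pashto
unsloth/gemma-4-E2B-it,Gemma4,hindi
"""

rows = list(csv.DictReader(io.StringIO(SAMPLE)))
paper_rows = [row for row in rows if row["family"] in PAPER_FAMILIES]
```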

| Column | Meaning |
|---|---|
| model | Hugging Face model identifier |
| family | Model family used for paper grouping |
| language | Target language |
| wer_pct | Word error rate after language-specific normalisation |
| cer_pct | Character error rate after language-specific normalisation |
| sfr_mean | Mean utterance-level Script Fidelity Rate, in percent |
| sfr_full_pct | Percent of utterances with SFR = 100% |
| sfr_zero_pct | Percent of utterances with SFR = 0% |
| dom_* | Dominant-script utterance counts |
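The dom_* columns count, per language, how many utterances are dominated by each script. A rough stdlib sketch of such a per-utterance classifier (the benchmark's own bucketing, e.g. the merged arabic_dari_urdu column, is more involved):

```python
import unicodedata
from collections import Counter

def dominant_script(text: str) -> str:
    """Majority Unicode script among alphabetic characters (rough sketch).

    Only illustrates the idea behind the dom_* counts; the benchmark's
    bucketing (e.g. arabic_dari_urdu) merges related scripts.
    """
    counts = Counter()
    for ch in text:
        if ch.isalpha():
            # The first word of a character's Unicode name is usually its script.
            counts[unicodedata.name(ch, "UNKNOWN").split(" ")[0].title()] += 1
    return counts.most_common(1)[0][0] if counts else "Empty"
```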

SFR library

For standalone use outside this artifact, install the published package:

uv add script-fidelity

Then import it as script_fidelity:

from script_fidelity import compute_sfr

compute_sfr("کابل کې ښه هوا ده", language="ps_af")
compute_sfr("kabul ke sha hawa da", language="pashto")
compute_sfr("नमस्ते", language="hindi")

One-off CLI use:

uvx --from script-fidelity sfr score --language ps_af --text "کابل کې ښه هوا ده"

Hugging Face Evaluate use:

import evaluate

sfr = evaluate.load("themechanism/script_fidelity_rate", module_type="metric")
sfr.compute(
    predictions=["کابل کې ښه هوا ده", "romanized output"],
    language="ps_af",
)

The validation script has already been run for the submitted artifact. It checks known positive and negative examples for Pashto, Hindi, and Somali.

Licenses and responsible release

FLEURS is CC BY 4.0. Whisper is MIT licensed. MMS-1B and SeamlessM4T-v2 are released under Meta's research licenses. Gemma 4 E2B is Apache-2.0 through the evaluated checkpoint. This supplement releases code, metadata, and evaluation outputs only; it releases no model weights and no new speech recordings.

See croissant_metadata.json for artifact metadata, intended use, limitations, PII status, and Responsible AI fields.

Anonymous citation

@misc{Anonymous2026ScriptFidelity,
  author = {Anonymous},
  title = {Script collapse in multilingual ASR: A reference-free metric and 100-pair benchmark},
  year = {2026},
  note = {Anonymous submission}
}