LM-Eval Benchmark Results

#2
by jajmangold - opened

Original: kaitchup/LFM2.5-1.2B-Thinking-autoround-W4A16 (quantized W4A16)

arc_challenge: acc=0.2099, acc_norm=0.2321
arc_easy: acc=0.3733, acc_norm=0.3641
hellaswag: acc=0.3239, acc_norm=0.3697
piqa: acc=0.5919, acc_norm=0.5658

DavidAU: DavidAU/LFM2.5-1.2B-Thinking-Claude-4.6-Opus-Heretic-Uncensored-DISTILL (FP16)

arc_challenge: acc=0.2125, acc_norm=0.2355
arc_easy: acc=0.3872, acc_norm=0.3645
hellaswag: acc=0.3185, acc_norm=0.3605
piqa: acc=0.5789, acc_norm=0.5686

Comparison Table

| Task | Original acc | DavidAU acc | Diff | Original acc_norm | DavidAU acc_norm | Diff |
|---|---|---|---|---|---|---|
| arc_challenge | 20.99% | 21.25% | +0.26% | 23.21% | 23.55% | +0.34% |
| arc_easy | 37.33% | 38.72% | +1.39% | 36.41% | 36.45% | +0.04% |
| hellaswag | 32.39% | 31.85% | -0.54% | 36.97% | 36.05% | -0.92% |
| piqa | 59.19% | 57.89% | -1.30% | 56.58% | 56.86% | +0.28% |
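The diff columns are just the DavidAU score minus the original score. A quick sketch in Python for regenerating the table (the dicts are transcribed from the `acc` numbers above; `acc_norm` works the same way):

```python
# Scores transcribed from the lm-eval runs above ("acc" metric only).
original = {  # kaitchup W4A16 quant
    "arc_challenge": 0.2099, "arc_easy": 0.3733,
    "hellaswag": 0.3239, "piqa": 0.5919,
}
davidau = {  # DavidAU FP16 distill
    "arc_challenge": 0.2125, "arc_easy": 0.3872,
    "hellaswag": 0.3185, "piqa": 0.5789,
}

# Print one table row per task: original, DavidAU, and the signed diff.
for task in original:
    diff = davidau[task] - original[task]
    print(f"{task:15s} {original[task]:.2%}  {davidau[task]:.2%}  {diff:+.2%}")
```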

Conclusion: The models perform nearly identically. The DavidAU distill shows slight improvement on ARC tasks but slight regression on HellaSwag and PIQA. All differences are within the standard error margins (~1-1.2%).
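The "~1-1.2%" margin is the binomial standard error, sqrt(p(1-p)/n), that lm-eval reports alongside each accuracy. A minimal check, assuming the standard lm-eval split sizes for these tasks (the item counts below are assumptions, not taken from the runs above):

```python
import math

def acc_stderr(p: float, n: int) -> float:
    """Binomial standard error of an accuracy p measured over n items."""
    return math.sqrt(p * (1.0 - p) / n)

# Assumed item counts for the default lm-eval evaluation splits.
n_items = {"arc_challenge": 1172, "arc_easy": 2376,
           "hellaswag": 10042, "piqa": 1838}

# At p ~ 0.21 over 1,172 ARC-Challenge items, the standard error is ~1.2%,
# so the +0.26% gap between the two models above is indistinguishable from noise.
print(f"arc_challenge SE: {acc_stderr(0.21, n_items['arc_challenge']):.4f}")
```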

Thank you for reminding me to update this card.
NOTE:

You must test the HERETIC versions, and in BF16: F16 will affect the results.

I have posted the internal numbers:

| Model | arc_challenge | arc_easy | boolq | hellaswag | openbookqa | piqa | winogrande |
|---|---|---|---|---|---|---|---|
| [BASE - HERETIC] LFM2.5-1.2B-Thinking-q8 | 0.352 | 0.418 | 0.656 | 0.476 | 0.366 | 0.681 | 0.508 |
| THIS MODEL | [0.356] | [0.471] | [0.691] | [0.505] | [0.386] | [0.701] | [0.539] |

These are above the base heretic model in every case, and on most tasks well beyond the margin of error.
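The per-task gains can be checked quickly; a sketch with the numbers transcribed from above (note the arc_challenge gain of +0.004 is the one delta that sits inside typical standard error):

```python
# Internal numbers transcribed from the post: base heretic vs. this model.
tasks = ["arc_challenge", "arc_easy", "boolq", "hellaswag",
         "openbookqa", "piqa", "winogrande"]
base_heretic = [0.352, 0.418, 0.656, 0.476, 0.366, 0.681, 0.508]
this_model   = [0.356, 0.471, 0.691, 0.505, 0.386, 0.701, 0.539]

# Print each task's scores and the signed delta (this model minus base).
for task, b, t in zip(tasks, base_heretic, this_model):
    print(f"{task:15s} {b:.3f} -> {t:.3f} ({t - b:+.3f})")
```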

For reference, this is the base NON-heretic model:
[BASE] LFM2.5-1.2B-Thinking-q8, 0.365, 0.426, 0.717, 0.486, 0.382, 0.687, 0.538

Generally, HERETIC'ing a model results in minor losses relative to the root model's metrics.
We test all versions to see how they perform.

This version is HERETIC and matches or exceeds even the BASE non-heretic model:
LFM2.5-1.2B-Thinking-Polaris-Heretic-Uncensored-DISTILL q8 [0.365],[0.532],[0.708],[0.507],0.356,[0.696],[0.535]

In larger-parameter models, heretic versions can and regularly do exceed the benchmarks of the root (original, non-heretic) models.
