ginigen-ai
7 followers · 78 following

AI & ML interests
None yet
Recent Activity
upvoted an article about 3 hours ago
Smol AI WorldCup: A 5-Axis Benchmark That Reveals What Small Language Models Can Really Do
reacted to SeaWolf-AI's post with 🔥 about 3 hours ago
Smol AI WorldCup: A 4B Model Just Beat 8B. Here's the Data

We evaluated 18 small language models from 12 makers on 125 questions across 7 languages. The results challenge the assumption that bigger is always better.

Community Article: https://huggingface.co/blog/FINAL-Bench/smol-worldcup
Live Leaderboard: huggingface.co/spaces/ginigen-ai/smol-worldcup
Dataset: huggingface.co/datasets/ginigen-ai/smol-worldcup

What we found:
- Gemma-3n-E4B (4B, 2GB RAM) outscores Qwen3-8B (8B, 5.5GB). Doubling the parameters gained only 0.4 points at 2.75x the RAM cost.
- GPT-OSS-20B fits in 1.5GB yet matches Champions-league dense models requiring 8.5GB. MoE architecture is the edge-AI game-changer.
- Thinking models hurt structured output. DeepSeek-R1-7B scores 8.7 points below the same-size Qwen3-8B and runs 2.7x slower.
- A 1.3B model fabricates confident fake content 80% of the time when prompted with nonexistent entities. The Qwen3 family hits 100% trap detection across all sizes.
- Qwen3-1.7B (1.2GB) outscores Mistral-7B, Llama-3.1-8B, and DeepSeek-R1-14B. The latest architecture at 1.7B beats older architectures at 14B.

What makes this benchmark different? Most benchmarks ask "how smart?" We measure five axes simultaneously: Size, Honesty, Intelligence, Fast, Thrift (SHIFT). Our ranking metric WCS = sqrt(SHIFT x PIR_norm) rewards models that are both high-quality and efficient. Smart but massive? Low rank. Tiny but poor? Also low.

Top 5 by WCS:
1. GPT-OSS-20B: WCS 82.6, 1.5GB, Raspberry Pi tier
2. Gemma-3n-E4B: WCS 81.8, 2.0GB, Smartphone tier
3. Llama-4-Scout: WCS 79.3, 240 tok/s, fastest model
4. Qwen3-4B: WCS 76.6, 2.8GB, Smartphone tier
5. Qwen3-1.7B: WCS 76.1, 1.2GB, IoT tier

Built in collaboration with the FINAL Bench research team. Interoperable with the ALL Bench Leaderboard for full small-to-large model comparison. The dataset is open under Apache 2.0 (125 questions, 7 languages). We welcome new model submissions.
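To make the ranking behavior concrete, here is a minimal Python sketch of the WCS formula exactly as the post states it (WCS = sqrt(SHIFT x PIR_norm)). The post does not describe how SHIFT aggregates its five axes or how PIR is normalized, so the inputs below are assumed to be pre-computed scores on a 0-100 scale; the function name and example values are illustrative only.

```python
import math

def wcs(shift: float, pir_norm: float) -> float:
    """World Cup Score sketch: the geometric mean of the SHIFT
    composite and the normalized PIR. Only the formula itself comes
    from the post; the 0-100 input scale is an assumption."""
    if shift < 0 or pir_norm < 0:
        raise ValueError("scores must be non-negative")
    return math.sqrt(shift * pir_norm)

# A geometric mean penalizes imbalance: a model strong on both axes
# outranks one that is excellent on quality but poor on efficiency.
print(round(wcs(85.0, 80.0), 1))  # 82.5 -- balanced
print(round(wcs(99.0, 40.0), 1))  # 62.9 -- "smart but massive" ranks low
```

This explains the "Smart but massive? Low rank. Tiny but poor? Also low." behavior: either factor near zero drags the whole score down, unlike an arithmetic mean.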
liked a dataset about 5 hours ago
ginigen-ai/smol-worldcup
Organizations
None yet
spaces (1)
Smol Worldcup · Running
Benchmark Evaluation for Small LLMs
models (0)
None public yet
datasets (1)
ginigen-ai/smol-worldcup
Viewer · Updated about 3 hours ago · 125 · 7
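Since the benchmark data is public under Apache 2.0, it should be loadable with the standard Hugging Face datasets library. A minimal sketch; the split name and column layout are assumptions (check the dataset card), only the dataset ID comes from this page:

```python
from datasets import load_dataset

# Dataset ID taken from the profile above; "train" split is assumed.
ds = load_dataset("ginigen-ai/smol-worldcup", split="train")

print(ds)        # expect 125 question records spanning 7 languages
print(ds[0])     # inspect one record to see the actual column names
```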