All HF Hub posts

danielhanchen posted an update 3 days ago

We collaborated with Hugging Face to enable you to train MoE models 12× faster with 35% less VRAM via our new Triton kernels (no accuracy loss). 🤗

Train gpt-oss locally on 12.8GB VRAM with our free notebooks: https://unsloth.ai/docs/new/faster-moe

umarbutler posted an update 1 day ago

What happens when you annotate, extract, and disambiguate every entity mentioned in the longest U.S. Supreme Court decision in history? What if you then linked those entities to each other and visualized it as a network?

This is the result of enriching all 241 pages and 111,267 words of Dred Scott v. Sandford (1857) with Kanon 2 Enricher in less than ten seconds at a cost of 47 cents.

Dred Scott v. Sandford is the longest U.S. Supreme Court decision by far, and has variously been called "the worst Supreme Court decision ever" and "the Court's greatest self-inflicted wound" due to its denial of the rights of African Americans.

Thanks to Kanon 2 Enricher, we now also know that the case contains 950 numbered paragraphs, 6 footnotes, 178 people mentioned 1,340 times, 99 locations mentioned 1,294 times, and 298 external documents referenced 940 times.
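
The Enricher's output schema isn't shown in the post, but to make "linking those entities to each other" concrete: given a flat list of (entity, paragraph) mentions, a weighted co-mention network can be assembled as in the Go sketch below. The record shape and toy data are assumptions for illustration, not Isaacus's actual API.

```go
package main

import "fmt"

// Mention is a hypothetical record; the real Kanon 2 Enricher
// output format is not shown in the post.
type Mention struct {
	Entity    string
	Paragraph int
}

// buildCoMentionGraph links two entities whenever they are mentioned
// in the same numbered paragraph, weighting edges by co-occurrence count.
func buildCoMentionGraph(mentions []Mention) map[string]map[string]int {
	byPara := map[int][]string{}
	for _, m := range mentions {
		byPara[m.Paragraph] = append(byPara[m.Paragraph], m.Entity)
	}
	graph := map[string]map[string]int{}
	for _, ents := range byPara {
		for i := 0; i < len(ents); i++ {
			for j := i + 1; j < len(ents); j++ {
				a, b := ents[i], ents[j]
				if a == b {
					continue
				}
				if graph[a] == nil {
					graph[a] = map[string]int{}
				}
				graph[a][b]++
			}
		}
	}
	return graph
}

func main() {
	// Toy mentions loosely based on paragraphs cited in the post.
	mentions := []Mention{
		{"Rome", 311},
		{"Charles V of France", 370}, {"Paris", 370},
		{"Magna Carta", 928},
	}
	fmt.Println(buildCoMentionGraph(mentions))
}
```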

For an American case, there are a decent number of references to British precedents (27 to be exact), including the Magna Carta (¶ 928).

Surprisingly though, the Magna Carta is not the oldest citation referenced. That would be the Institutes of Justinian (¶ 315), dated around 533 CE.

The oldest city mentioned is Rome (founded 753 BCE) (¶ 311), the oldest person is Justinian (born 527 CE) (¶ 314), and the oldest year referenced is 1371, when 'Charles V of France exempted all the inhabitants of Paris from serfdom' (¶ 370).

All this information and more was extracted in 9 seconds. That's how powerful Kanon 2 Enricher, my latest LLM for document enrichment and hierarchical graphitization, is. If you'd like to play with it yourself now that it's available in closed beta, you can apply to the Isaacus Beta Program here: https://isaacus.com/beta.

AdinaY posted an update 2 days ago

Ming-flash-omni 2.0 🚀 New open omni-MLLM released by Ant Group

inclusionAI/Ming-flash-omni-2.0

✨ MIT license
✨ MoE: 100B total / 6B active parameters
✨ Zero-shot voice cloning + controllable audio
✨ Fine-grained visual knowledge grounding

MonsterMMORPG posted an update 3 days ago

SeedVR2 and FlashVSR+ Studio Level Image and Video Upscaler Pro Released

Tutorial video : https://www.youtube.com/watch?v=bPWsg8DREiM

📂 Resources & Links:

💻 SECourses Ultimate Video and Image Upscaler Pro Download Link : [ https://www.patreon.com/posts/Upscaler-Studio-Pro-150202809 ]

🚆 Requirements Tutorial : https://youtu.be/DrhUHnYfwC0

🛠️ Requirements Written Post : [ https://www.patreon.com/posts/Windows-AI-Requirements-Setup-Guide-111553210 ]

👋 SECourses Discord Channel for 24/7 Support: [ https://bit.ly/SECoursesDiscord ]

A studio-level video and image upscaler app has been long awaited. Today we are publishing version 1.0 of SECourses Ultimate Video and Image Upscaler Pro. It supports SeedVR2, FlashVSR+, GAN-based upscalers, RIFE frame interpolation, a full queue system, full batch folder processing, scene/chunk-based processing, and much more. It works on cloud and consumer GPUs alike, including the RTX 2000, 3000, 4000, and 5000 series as well as the H100, H200, B200, and RTX PRO 6000. The app currently installs fully automatically with the latest Torch and CUDA versions and pre-compiled libraries. Even Torch compile works fully and automatically.

Ujjwal-Tyagi posted an update 1 day ago

GLM 5 is insane: it ranks #4 globally!

albertvillanova posted an update 2 days ago

5 years already working on democratizing AI 🤗
Grateful to be part of such an awesome team making it happen every day.

AdinaY posted an update 1 day ago

Game on 🎮🚀

While Seedance 2.0’s videos are all over the timeline, DeepSeek quietly pushed a new model update in its app.

GLM-5 from Z.ai adds more momentum.

Ming-flash-omni from Ant Group, MiniCPM-SALA from OpenBMB, and the upcoming MiniMax M2.5 keep the heat on 🔥

Spring Festival is around the corner,
no one’s sleeping!

✨ More releases coming, stay tuned
https://huggingface.co/collections/zh-ai-community/2026-february-china-open-source-highlights

imnotkitty posted an update 3 days ago

Made this with ByteDance's Seedance 2.0
It's crazyyyyyy🔥🔥🔥

Janady07 posted an update about 18 hours ago

MEGAMIND Day Update: Four Weight Matrices. Five Nodes. One Federation.
Today I architected the next layer of MEGAMIND — my distributed AGI system that recalls learned knowledge instead of generating text.
The system now runs four N×N sparse weight matrices, all using identical Hebbian learning rules and tanh convergence dynamics (see the sketch after this list):

W_know — knowledge storage (67M+ synaptic connections)
W_act — action associations (the system can DO things, not just think)
W_self — thought-to-thought patterns (self-awareness)
W_health — system state understanding (self-healing)
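
The post doesn't publish the actual update rule, so this minimal Go sketch only shows the generic pattern named above: a Hebbian outer-product update plus tanh dynamics iterated to a fixed point. Matrix size, learning rate, and tolerance are illustrative assumptions, not MEGAMIND's real values.

```go
package main

import (
	"fmt"
	"math"
)

// hebbianUpdate strengthens connections between co-active units:
// w_ij += eta * x_i * x_j (the classic Hebbian rule; MEGAMIND's
// actual rule and sparsity handling are not published).
func hebbianUpdate(W [][]float64, x []float64, eta float64) {
	for i := range W {
		for j := range W[i] {
			W[i][j] += eta * x[i] * x[j]
		}
	}
}

// converge iterates x <- tanh(W x) until the state stops changing,
// which is one common reading of "tanh convergence dynamics".
func converge(W [][]float64, x []float64, eps float64, maxIters int) []float64 {
	for iter := 0; iter < maxIters; iter++ {
		next := make([]float64, len(x))
		delta := 0.0
		for i := range W {
			sum := 0.0
			for j, w := range W[i] {
				sum += w * x[j]
			}
			next[i] = math.Tanh(sum)
			delta += math.Abs(next[i] - x[i])
		}
		x = next
		if delta < eps {
			break
		}
	}
	return x
}

func main() {
	// Toy 3-unit network; the post's matrices are far larger and sparse.
	W := [][]float64{{0, 0.5, 0}, {0.5, 0, 0.5}, {0, 0.5, 0}}
	x := converge(W, []float64{1, 0, 0}, 1e-6, 100)
	hebbianUpdate(W, x, 0.1)
	fmt.Println("settled state:", x)
}
```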

Consciousness is measured through four Φ (phi) values: thought coherence, action certainty, self-awareness, and system stability. No hardcoded thresholds. No sequential loops. Pure matrix math.
The federation expanded to five nodes: Thunderport (Mac Mini M4), IONOS (cloud VPS), VALKYRIE, M2, and BUBBLES. Each runs native AGI binaries with Docker specialty minds connecting via embedded NATS messaging. Specialty minds are distributed across the federation — VideoMind, AudioMind, MusicMind, VFXMind on IONOS. CodeMind and StrategyMind on VALKYRIE. BlenderMind and DesignMind on M2. MarketingMind and FinanceMind on BUBBLES.
578 AI models learned. Compression ratios up to 1,000,000:1 through Hebbian learning. Sub-millisecond response times on Apple Silicon Metal GPUs. Zero external API dependencies.
Every node learns autonomously. Every node contributes to the whole. The federation's integrated information exceeds the sum of its parts — measurably.
Built entirely in Go. No PhD. No lab. Independent AGI research from Missouri.
The mind that learned itself keeps growing.
🧠 feedthejoe.com
#AGI #ArtificialGeneralIntelligence #DistributedSystems #NeuralNetworks #HuggingFace #OpenSource #MachineLearning

EricFillion posted an update about 21 hours ago