Sparse-BitNet: 1.58-bit LLMs are Naturally Friendly to Semi-Structured Sparsity
Paper • 2603.05168

Fine-tuning LLMs to 1.58bit: extreme quantization made easy
Article • Published Sep 18, 2024

Think Only When You Need with Large Hybrid-Reasoning Models
Paper • 2505.14631 • Published May 20, 2025