When to Ensemble: Identifying Token-Level Points for Stable and Fast LLM Ensembling Paper • 2510.15346 • Published Oct 17, 2025 • 34
ParallelBench: Understanding the Trade-offs of Parallel Decoding in Diffusion LLMs Paper • 2510.04767 • Published Oct 6, 2025 • 28
XQuant: Breaking the Memory Wall for LLM Inference with KV Cache Rematerialization Paper • 2508.10395 • Published Aug 14, 2025 • 42
furiosa-ai-dev/EXAONE-3.0-7.8B-Instruct-converted Text Generation • 8B • Updated Nov 14, 2024 • 16 • 1