Coding Models
- Qwen/Qwen3-Coder-480B-A35B-Instruct • Text Generation • 480B • Updated Aug 21, 2025 • 73.4k • 1.31k

Models
- OpenGVLab/InternVL3-78B-AWQ • Image-Text-to-Text • Updated Sep 11, 2025 • 3.04k • 10
- OpenGVLab/InternVL3-78B • Image-Text-to-Text • Updated Sep 11, 2025 • 39.1k • 232
- google-t5/t5-base • Translation • Updated Feb 14, 2024 • 2.23M • 769
- HuggingFaceH4/zephyr-7b-alpha • Text Generation • Updated Oct 16, 2024 • 4.36k • 1.12k

Vision Models
- genmo/mochi-1-preview • Text-to-Video • Updated Sep 4, 2025 • 8.74k • 1.31k
- stabilityai/stable-diffusion-3.5-large • Text-to-Image • Updated Oct 22, 2024 • 60k • 3.4k
- stabilityai/stable-diffusion-3.5-medium • Text-to-Image • Updated Oct 31, 2024 • 179k • 910

Must Reads
- ZeroSearch: Incentivize the Search Capability of LLMs without Searching • Paper 2505.04588 • Published May 7, 2025 • 65