These models were quantized from Qwen/Qwen3-4B-Thinking-2507 using ggml-org/gguf-my-repo, and the bf16 and f16 conversions were produced with llama.cpp at commit 25ff6f7659f6a5c47d6a73eada5813f0495331f0.
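For reference, the bf16/f16 conversion step corresponds to llama.cpp's Hugging Face converter script. The sketch below drives it from Python; the local paths are placeholders, and the script and flag names should be checked against the llama.cpp checkout at the commit above, so treat this as an assumption-laden sketch rather than the exact command that was used.

```python
# Sketch: reproduce the HF -> GGUF (bf16) conversion with llama.cpp's converter script.
# Assumptions: llama.cpp is cloned at ./llama.cpp (at the commit noted above) and the
# original model has been downloaded to ./Qwen3-4B-Thinking-2507. Script and flag names
# may differ between llama.cpp versions.
import subprocess

subprocess.run(
    [
        "python", "llama.cpp/convert_hf_to_gguf.py",
        "Qwen3-4B-Thinking-2507",                    # local copy of Qwen/Qwen3-4B-Thinking-2507
        "--outtype", "bf16",                         # run again with "f16" for the f16 file
        "--outfile", "Qwen3-4B-Thinking-2507-bf16.gguf",
    ],
    check=True,
)
```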

Format: GGUF · Parameters: 4B · Architecture: qwen3

Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit.
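A minimal way to try one of these quantizations is through llama-cpp-python, which can pull a GGUF file straight from the Hub. The filename pattern below is an assumption; match it against the actual file names in this repository.

```python
# Sketch: load a quantized GGUF from this repo with llama-cpp-python.
# The filename glob is an assumption -- check the repo's file list for the exact name.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="skymizer/Qwen3-4B-Thinking-2507-GGUF",
    filename="*Q4_K_M.gguf",   # pick the quantization level you want (3- to 16-bit)
    n_ctx=4096,                # context window; raise it for longer thinking traces
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly explain what GGUF quantization does."}]
)
print(out["choices"][0]["message"]["content"])
```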

