This is a mixed-precision MLX quantization of Ministral 3 14B (dgomes03/Ministral-3-14B-Instruct-2512-mixed-6-4-bit): the embedding layer and the output (head) layer are quantized to 6-bit precision, while the rest of the model uses 4-bit quantization. Keeping these precision-sensitive layers at a higher bit width aims to improve output quality over a uniform 4-bit quantization, at a small cost in model size and inference speed.
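
Below is a minimal sketch of how such a mixed 6-/4-bit conversion can be produced with mlx-lm's `convert()` and its `quant_predicate` hook. This is not the exact recipe used for this repo; the layer-name matching strings (`embed_tokens`, `lm_head`), the group size, and the upstream repo id are assumptions for illustration.

```python
# Sketch: mixed-precision quantization with mlx-lm's quant_predicate hook.
from mlx_lm import convert

def mixed_6_4_predicate(path, module, config):
    # Hypothetical layer-name matching; actual parameter paths may differ
    # by architecture. Embedding and output head stay at 6-bit.
    if "embed_tokens" in path or "lm_head" in path:
        return {"bits": 6, "group_size": 64}
    # All other quantizable layers get 4-bit.
    return {"bits": 4, "group_size": 64}

convert(
    "mistralai/Ministral-3-14B-Instruct-2512",  # assumed upstream repo id
    mlx_path="ministral-3-14b-mixed-6-4-bit",
    quantize=True,
    quant_predicate=mixed_6_4_predicate,
)
```

Once converted (or downloaded from this repo), the model loads like any other MLX model:

```python
from mlx_lm import load, generate

model, tokenizer = load("dgomes03/Ministral-3-14B-Instruct-2512-mixed-6-4-bit")
print(generate(model, tokenizer, prompt="Hello", max_tokens=64))
```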
