# Qwen2.5-7B-Instruct Q4_K_M GGUF

Q4_K_M GGUF quantization of Junn17/qwen, produced with Unsloth.
## Usage

Run the model with llama.cpp's `llama-cli`:

```
llama-cli --model qwen_model.Q4_K_M.gguf -p "Hello!"
```
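The same GGUF file can also be loaded from Python via llama-cpp-python. This is a minimal sketch, assuming the file has been downloaded to the working directory; the context size and sampling settings are illustrative, not recommendations from the model authors.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# Assumes qwen_model.Q4_K_M.gguf is present in the current directory.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen_model.Q4_K_M.gguf",  # path to the downloaded GGUF file
    n_ctx=4096,                           # context window; adjust to your RAM budget
)

# Qwen2.5-Instruct is a chat model, so use the chat-completion API,
# which applies the chat template stored in the GGUF metadata.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

The Q4_K_M quantization keeps the full 7B parameter count but stores weights in roughly 4 bits each, so the file fits in well under 8 GB of memory.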