Based on your Qwen3.5 35B KLD numbers it seems like the q4_0 quant is a great fit for both accuracy and Vulkan speed. Any intentions to create q4_0 quants for Qwen3.5 122B and 397B?
Thanks for giving it a go! I decided to do testing on the smaller 35B to see if it would work well and so far so good.
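For context, the KLD numbers are just the mean KL divergence between the full-precision model's per-token probabilities and the quantized model's, measured over a test text. A minimal sketch of the metric, assuming you already have per-token logit arrays dumped from both models (the array names are stand-ins, not any particular tool's output):

```python
import numpy as np

def mean_kld(base_logits: np.ndarray, quant_logits: np.ndarray) -> float:
    """Mean KL(P_base || P_quant) over tokens; inputs are [n_tokens, vocab] logits."""
    def log_softmax(x):
        x = x - x.max(axis=-1, keepdims=True)
        return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

    log_p = log_softmax(base_logits)    # full-precision reference distribution
    log_q = log_softmax(quant_logits)   # quantized model's distribution
    kld_per_token = (np.exp(log_p) * (log_p - log_q)).sum(axis=-1)
    return float(kld_per_token.mean())
```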
Do you have enough Vulkan VRAM to fit the larger quants? Given they are roughly 5 bpw, they will be fairly large.
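Rough back-of-envelope on the sizes (treating the model names as the total parameter counts, which is an approximation):

```python
# Rough GGUF size estimate: size_bytes ~= total_params * bits_per_weight / 8
# Parameter counts below are taken loosely from the model names.
def gguf_size_gib(total_params: float, bpw: float) -> float:
    return total_params * bpw / 8 / 1024**3

for name, params in [("122B", 122e9), ("397B", 397e9)]:
    print(f"{name} @ ~5 bpw ~= {gguf_size_gib(params, 5.0):.0f} GiB")
# 122B @ ~5 bpw ~= 71 GiB
# 397B @ ~5 bpw ~= 231 GiB
```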
Also, I'm catching up from the weekend to understand the updates to the unsloth quants, though I'm not sure they have a "vulkan mix" recipe similar to this one yet.
I've got 240 GB of AMD VRAM available, that might be enough.
I haven't cooked any "vulkan mix" editions for the bigger models yet, but with that much VRAM you could run a pure Q8_0, which would also be quite fast for prompt processing (PP) but slower for token generation (TG) given the usual memory-bandwidth bottleneck with larger active parameters.
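To put rough numbers on the bandwidth argument (both the active-parameter count and the bandwidth figure below are illustrative assumptions, not measurements of any specific model or GPU):

```python
# TG is roughly bandwidth-bound: each generated token reads the active weights
# once, so tg_tok_per_s ~= memory_bandwidth / active_weight_bytes.
def est_tg_tok_per_s(active_params: float, bpw: float, bandwidth_gb_s: float) -> float:
    active_bytes = active_params * bpw / 8          # bytes touched per token
    return bandwidth_gb_s * 1e9 / active_bytes

ACTIVE_PARAMS = 17e9   # hypothetical active params for a large MoE
BANDWIDTH = 1000       # GB/s, hypothetical aggregate GPU bandwidth

for label, bpw in [("Q8_0 (~8.5 bpw)", 8.5), ("q4_0 (~4.5 bpw)", 4.5)]:
    print(f"{label}: ~{est_tg_tok_per_s(ACTIVE_PARAMS, bpw, BANDWIDTH):.0f} tok/s upper bound")
```

The absolute numbers aren't meaningful, but the ratio is the point: TG throughput scales roughly inversely with bits per weight, while PP stays compute-bound.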
I still have to do some research on the updated quant landscape to see what unsloth and AesSedai are releasing now, as my usual quants are ik_llama.cpp specific, so the "vulkan mixes" are a bit new for me and I'm still figuring out how they fit into my quant portfolio haha...
I'm curious what your daily driver is and what kind of client you're using, e.g. opencode or mostly SillyTavern? haha... Also, I'm assuming you use mainline llama.cpp and are not compiling ik_llama.cpp?
My use is pretty much exclusively via openwebui as a chatbot, and yes, mainline llama.cpp. I'd love to use ik_llama.cpp, but the lack of ROCm/(updated) Vulkan support is the limiter.