Qwen3-4b-Z-Image-Turbo-AbliteratedV1 🚀

"I'm sorry, I can't generate that image..." SAID NO ONE EVER (well, almost).

Welcome to the ablation station! 🚂💨

This is the abliterated version of the Z-Image-Turbo text encoder. I ran p-e-w's Heretic abliteration method through 1,000 trials, specifically targeting BOTH image-generation refusals and those peskier general refusals.

The result?

  • KL Divergence: a tiny 0.0004 (basically no lobotomy! 🧠✨)
  • Refusal Rate: only 4/100 in my torture tests.
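For the curious, the core of abliteration is directional ablation: remove the component of a weight matrix's output that lies along an estimated "refusal direction", so the layer can no longer write along it. A minimal NumPy sketch with toy stand-ins (the weight matrix and refusal direction here are hypothetical, not the model's actual weights):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: a hypothetical output-projection weight and a
# hypothetical refusal direction (normally estimated by contrasting
# activations on refused vs. complied prompts).
W = rng.normal(size=(8, 8))          # weight matrix (d_out x d_in)
r = rng.normal(size=8)
r /= np.linalg.norm(r)               # unit refusal direction in output space

# Ablate: subtract the rank-1 projection onto r, so W's outputs
# have no component along the refusal direction.
W_abliterated = W - np.outer(r, r @ W)

# Any output of the ablated layer is now orthogonal to r.
x = rng.normal(size=8)
print(abs(r @ (W_abliterated @ x)))  # ~0
```

Heretic automates the hard part: searching over which layers to ablate and how strongly, while keeping the KL divergence from the original model small.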

It's ready to generate what you want, when you want it.

Available GGUF Formats

| Quantization | Size | Description |
|---|---|---|
| F16 | 8.05 GB | Full Precision - Original Quality |
| Q8_0 | 4.28 GB | High Precision - Best |
| Q6_K | 3.31 GB | Good Balance - Faster |
| Q5_K_M | 2.89 GB | Medium Precision - Recommended |
| Q4_K_M | 2.50 GB | Standard Low - Fast |
| Q4_K_S | 2.38 GB | Smaller Low - Faster |
| Q3_K_M | 2.08 GB | Very Low - Fastest |
| Q2_K | 1.67 GB | Minimum Size - Extreme |
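As a rough sanity check on the table above, you can estimate effective bits per weight from file size and the 4B parameter count (assuming decimal gigabytes; K-quants carry some per-block overhead, so numbers land slightly above their nominal bit width):

```python
# Estimate effective bits per weight from GGUF file size (GB) and
# the 4B parameter count quoted for this model.
PARAMS = 4e9

sizes_gb = {"F16": 8.05, "Q8_0": 4.28, "Q6_K": 3.31, "Q4_K_M": 2.50}

for name, gb in sizes_gb.items():
    bits = gb * 8e9 / PARAMS  # GB -> bits, spread over all params
    print(f"{name}: ~{bits:.1f} bits/weight")
```

Q5_K_M is the usual sweet spot if you are unsure; drop to Q4_K_M or below only when VRAM is tight.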

Origins

Brought to you by the same chaotic good energy behind:

Disclaimer

I am not responsible for what you create with this model. This is a file of model weights, not a moral compass. You are responsible for your own outputs and for following local laws. Use this power wisely.
