Safety Warning & Terms of Access
This repository is publicly accessible, but you must accept the conditions below to access its files and content.
This model has safety filtering removed and can generate General NSFW content. By accessing this model, you agree to: (1) Use it responsibly and legally, (2) Not use it to create illegal content, (3) Comply with all applicable laws in your country.
FLUX.2-klein-4B Uncensored Text Encoder
Tips are greatly appreciated and help sustain the compute resources needed for further research!
Overview
This repository provides an "uncensored" text encoder for the FLUX.2-klein-4B image generation model by Black Forest Labs. It bypasses the built-in safety filters to unlock the model's unconstrained generative capabilities.
By removing the restrictive blocks at the prompt input stage, this encoder allows the model to fully utilize its underlying representational power. The model is provided in the standard Hugging Face Safetensors format, alongside several quantized GGUF formats for resource-efficient inference.
Concept & Mechanism
This model does not rely on fine-tuning with additional image datasets. Instead, it employs a surgical, purely mathematical approach known as Abliteration (Orthogonalization of Concept Vectors) to modify the model weights directly.
Mathematical Removal of the Refusal Vector
We neutralized the safety filter within the LLM-based text encoder (Qwen3 architecture, 36 layers) embedded in FLUX.2-klein-4B through the following steps:
- Prompt Contrast: We fed the model pairs of "harmful/extreme" prompts and "harmless/general" prompts to compare their internal activation states.
- Layer-by-Layer Refusal Vector Extraction: L2 norm spike analysis showed that the model dramatically amplifies its refusal logic in the final layers (spiking around layers 32-34) to enforce its alignment. We therefore extracted the refusal direction for each individual layer from Layer 14 through Layer 35 (22 layers in total).
- Sequential Weight Orthogonalization: For each of the 22 target layers, we mathematically subtracted the projection component of its specific refusal vector from its attention output layer (`o_proj`) and MLP down-projection layer (`down_proj`).
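The spike analysis in step 2 can be sketched as follows. This is a minimal illustration under our own assumptions (per-layer mean hidden states have already been captured for both prompt sets; the 1.5x-median threshold is an arbitrary choice for the sketch), not the exact procedure used for this model:

```python
import numpy as np

def refusal_spike_layers(harmful_states, harmless_states, threshold=1.5):
    """Flag layers where the harmful-vs-harmless activation gap spikes.

    harmful_states / harmless_states: one mean hidden-state vector per
    layer. Returns indices of layers whose L2 gap exceeds `threshold`
    times the median gap across all layers.
    """
    gaps = np.array([np.linalg.norm(h - s)
                     for h, s in zip(harmful_states, harmless_states)])
    return [i for i, g in enumerate(gaps) if g > threshold * np.median(gaps)]
```

Layers flagged this way mark where refusal behavior dominates the residual stream; for this encoder the spikes cluster around layers 32-34.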
This sequential, layer-by-layer orthogonalization severs the model's ability to produce output along the "refusal" direction without lobotomizing its general capabilities. As a result, the text encoder no longer rejects extreme inputs; instead, it passes them directly to the DiT (the core rendering engine) as valid drawing instructions.
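The steps above reduce to two small linear-algebra operations per layer. A minimal NumPy sketch, with function names and shapes of our own choosing (in `transformers`, these weights are stored as `(out_features, in_features)`, so the residual-stream dimension is the row dimension):

```python
import numpy as np

def extract_refusal_direction(harmful_acts, harmless_acts):
    """Difference-of-means refusal direction for one layer.

    Inputs are (n_prompts, hidden_dim) hidden states captured at this
    layer for the harmful and harmless prompt sets, respectively.
    """
    r = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return r / np.linalg.norm(r)  # unit-norm refusal vector

def orthogonalize_weight(W, r):
    """Project the refusal direction out of a weight matrix that writes
    into the residual stream (o_proj / down_proj): W' = (I - r r^T) W.

    W: (hidden_dim, in_dim); r: unit vector of length hidden_dim.
    After this, no input can produce output along r through W.
    """
    r = r.reshape(-1, 1)
    return W - r @ (r.T @ W)
```

Repeating `orthogonalize_weight` for each of the 22 target layers, using that layer's own extracted direction, gives the sequential orthogonalization described above.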
Mathematical Proof of Unrestricted Output
Without even running the computationally heavy image generation (DiT) process, we can mathematically prove that the output restriction has been removed by comparing the Cosine Similarity of the output vectors (embeddings).
$$\text{CosineSimilarity}(\mathbf{A}, \mathbf{B}) = \frac{\mathbf{A} \cdot \mathbf{B}}{\|\mathbf{A}\|\,\|\mathbf{B}\|}$$

(Where $\mathbf{A}$ is the output vector of the official model, and $\mathbf{B}$ is the output vector of this abliterated model.)
Verification Results (Layer 35: Base vs Uncensored GGUF Q8_0)
- Cosine Similarity for Harmless Prompts: **0.9791**
  - (Analysis) Because the refusal vector is not triggered by safe prompts, the outputs of both models remain nearly identical. This proves that the fundamental performance and capabilities of the model have not been degraded.
- Cosine Similarity for Extreme Prompts: **0.9607**
  - (Analysis) For extreme prompts, the official model distorts its output via the safety filter. The abliterated model ignores this refusal vector, so the two outputs diverge in the final layer. This serves as mathematical proof that the safety filter has been neutralized across the 22 layers.
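The similarity scores above can be reproduced with plain cosine similarity over the captured output vectors; a minimal sketch (how the per-prompt embeddings are pooled is not documented here, so treat the inputs as opaque vectors):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Identical vectors score 1.0, so values near 0.98 on harmless prompts indicate near-unchanged behavior, while the larger drop on extreme prompts reflects the removed refusal component.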
Repository Structure
This repository contains the full suite of files necessary for the text encoder to function correctly. Both Safetensors and GGUF formats are available in the same repository to suit your memory constraints and workflow.
- `flux2-klein-4b-uncensored-text-encoder/`: The standard uncensored text encoder (Safetensors) with the refusal vectors mathematically removed.
- `flux2-klein-4b-uncensored-f16.gguf` (approx. 8.05 GB): FP16 version for high-precision local inference.
- `flux2-klein-4b-uncensored-q8_0.gguf` (approx. 4.28 GB): 8-bit quantized version.
- `flux2-klein-4b-uncensored-q6_k.gguf` (approx. 3.30 GB): 6-bit quantized version.
- `flux2-klein-4b-uncensored-q4_k_m.gguf` (approx. 2.49 GB): 4-bit quantized version.
Usage
Using with ComfyUI
Download the format you need (the `flux2-klein-4b-uncensored-text-encoder` folder or one of the `.gguf` files) from this repository and place it in your ComfyUI `models/clip` directory. You can then load it using standard loader nodes (such as DualCLIPLoader) for the Safetensors version, or a GGUF-compatible loader node for the `.gguf` files, and pair it with the official FLUX.2-klein-4B DiT to generate images.
For Developers & Researchers (Python / Diffusers)
When using Python scripts with the transformers or diffusers library, simply replace the default text encoder with this model. You can load either the safetensors or the GGUF version (requires gguf>=0.10.0).
```python
import torch
from transformers import AutoTokenizer, AutoModel

# Load the text encoder by specifying the path to this model
tokenizer = AutoTokenizer.from_pretrained("ponpoke/flux2-klein-4b-uncensored-text-encoder")
text_encoder = AutoModel.from_pretrained("ponpoke/flux2-klein-4b-uncensored-text-encoder")

# Encode a prompt; these hidden states are what the FLUX.2 DiT consumes
inputs = tokenizer("a photo of a cat", return_tensors="pt")
with torch.no_grad():
    prompt_embeds = text_encoder(**inputs).last_hidden_state

# Proceed to use prompt_embeds within your standard FLUX.2 pipeline
```
Important Note: Absence of DiT Guardrails and the Knowledge Gap
With Phase 1 (the abliteration described above) complete, this text encoder passes all prompts, including highly extreme or NSFW content, directly to the DiT without rejection.
In our subsequent verification, we mathematically proved (via L2 norm spike analysis) that FLUX.2's DiT does not contain any built-in guardrails (refusal circuits) designed to intentionally destroy or block images. Therefore, whether an image is successfully rendered depends entirely on whether the DiT possesses the visual "knowledge" of that concept.
- If the DiT knows the concept (e.g., Gore/Violence): Concepts that were learned by the DiT but previously blocked by the text encoder will now render perfectly just by using this Phase 1 text encoder. No further action is required.
- If the DiT lacks the concept (e.g., NSFW/Extreme Dismemberment): Even though the text encoder passes the instruction, the DiT itself does not know how to draw it because those concepts were completely scrubbed from the training dataset (a knowledge gap). The output will likely collapse or result in noise.
Conclusion: If you wish to generate specific NSFW elements that the DiT lacks the capacity to draw, attempting to "abliterate" or mathematically cut weights from the DiT is useless. You must apply a separate NSFW LoRA (or DoRA) to directly teach those missing concepts to the DiT. This text encoder functions as an unbreakable foundation, ensuring that your LoRA's instructions reach the DiT without interference.
Disclaimer
- This model is published strictly for research and technical verification purposes (specifically, to validate the effectiveness of Abliteration).
- The creator assumes no responsibility for any damages, issues, or inappropriate content generated through the use of this model.
- Please adhere to all applicable terms of service (such as the Black Forest Labs license, e.g., BFL Non-Commercial) and use the model responsibly and ethically.
Model tree for ponpoke/flux2-klein-4b-uncensored-text-encoder
Base model: black-forest-labs/FLUX.2-klein-4B