On-the-fly Repulsion in the Contextual Space for Rich Diversity in Diffusion Transformers
Abstract
Applying on-the-fly repulsion in the contextual space during the forward pass lets diffusion transformers generate diverse visual outputs while preserving visual quality and semantic alignment, and it remains efficient even in distilled models.
Modern Text-to-Image (T2I) diffusion models have achieved remarkable semantic alignment, yet they often suffer from a significant lack of variety, converging on a narrow set of visual solutions for any given prompt. This typicality bias presents a challenge for creative applications that require a wide range of generative outcomes. We identify a fundamental trade-off in current approaches to diversity: modifying model inputs requires costly optimization to incorporate feedback from the generative path, while acting on spatially-committed intermediate latents tends to disrupt the forming visual structure, leading to artifacts. In this work, we propose repulsion in the Contextual Space as a novel framework for achieving rich diversity in Diffusion Transformers. We apply on-the-fly repulsion in the multimodal attention channels during the transformer's forward pass, injecting it between blocks where text conditioning is enriched with emergent image structure. This redirects the guidance trajectory after it is structurally informed but before the composition is fixed. Our results demonstrate that repulsion in the Contextual Space produces significantly richer diversity without sacrificing visual fidelity or semantic adherence. Furthermore, our method is uniquely efficient, incurring only a small computational overhead while remaining effective even in modern "Turbo" and distilled models where traditional trajectory-based interventions typically fail.
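To make the idea concrete, a repulsion step between transformer blocks can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, the batch-mean repulsion rule, and the choice to act on pooled contextual states are all assumptions introduced here for exposition.

```python
import numpy as np

def repulse_context(ctx, strength=0.1, eps=1e-8):
    """Hypothetical sketch of contextual-space repulsion.

    `ctx` holds the contextual (text-conditioning) hidden states for a
    batch of parallel generations, shape (batch, tokens, dim). Each
    sample's states are pushed away from the batch's mean direction so
    the generations diverge, while per-sample magnitudes are preserved.
    The exact repulsion rule here is an illustrative assumption.
    """
    flat = ctx.reshape(ctx.shape[0], -1)                      # (batch, tokens*dim)
    norms = np.linalg.norm(flat, axis=1, keepdims=True) + eps
    unit = flat / norms                                       # unit directions
    # Mean direction of the batch; repelling from it spreads samples apart.
    mean_dir = unit.mean(axis=0, keepdims=True)
    repelled = unit - strength * mean_dir
    # Renormalize and restore each sample's original magnitude.
    repelled /= np.linalg.norm(repelled, axis=1, keepdims=True) + eps
    return (repelled * norms).reshape(ctx.shape)
```

In a full pipeline, such a step would be inserted between selected transformer blocks during the forward pass, after text conditioning has absorbed emergent image structure, rather than applied to the input prompt embeddings or the spatial latents.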
Community
Conditionally accepted to SIGGRAPH 2026
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- EVLF: Early Vision-Language Fusion for Generative Dataset Distillation (2026)
- ReDiStory: Region-Disentangled Diffusion for Consistent Visual Story Generation (2026)
- ELROND: Exploring and decomposing intrinsic capabilities of diffusion models (2026)
- GASS: Geometry-Aware Spherical Sampling for Disentangled Diversity Enhancement in Text-to-Image Generation (2026)
- PokeFusion Attention: A Lightweight Cross-Attention Mechanism for Style-Conditioned Image Generation (2026)
- Token Pruning for In-Context Generation in Diffusion Transformers (2026)
- Language-Free Generative Editing from One Visual Example (2026)