2D generation
• arXiv:2312.02149
• Generative Rendering: Controllable 4D-Guided Video Generation with 2D Diffusion Models (arXiv:2312.01409)
• SANeRF-HQ: Segment Anything for NeRF in High Quality (arXiv:2312.01531)
• Smooth Diffusion: Crafting Smooth Latent Spaces in Diffusion Models (arXiv:2312.04410)
• GenTron: Delving Deep into Diffusion Transformers for Image and Video Generation (arXiv:2312.04557)
• MVDD: Multi-View Depth Diffusion Models (arXiv:2312.04875)
• ECLIPSE: A Resource-Efficient Text-to-Image Prior for Image Generations (arXiv:2312.04655)
• FreeControl: Training-Free Spatial Control of Any Text-to-Image Diffusion Model with Any Condition (arXiv:2312.07536)
• Faster Diffusion: Rethinking the Role of UNet Encoder in Diffusion Models (arXiv:2312.09608)
• SCEdit: Efficient and Controllable Image Diffusion Generation via Skip Connection Editing (arXiv:2312.11392)
• Rich Human Feedback for Text-to-Image Generation (arXiv:2312.10240)
• Towards Accurate Guided Diffusion Sampling through Symplectic Adjoint Method (arXiv:2312.12030)
• Learning Continuous 3D Words for Text-to-Image Generation (arXiv:2402.08654)
• Cross-Attention Makes Inference Cumbersome in Text-to-Image Diffusion Models (arXiv:2404.02747)
• Invisible Stitch: Generating Smooth 3D Scenes with Depth Inpainting (arXiv:2404.19758)
• Quant VideoGen: Auto-Regressive Long Video Generation via 2-Bit KV-Cache Quantization (arXiv:2602.02958)