Papers
arxiv:2603.21304

F4Splat: Feed-Forward Predictive Densification for Feed-Forward 3D Gaussian Splatting

Published on Mar 22 · Submitted by Minseong Bae on Mar 24

Abstract

F4Splat introduces a predictive densification approach for 3D Gaussian splatting that adaptively allocates Gaussians based on spatial complexity and view overlap, reducing redundancy while maintaining reconstruction quality.

AI-generated summary

Feed-forward 3D Gaussian Splatting methods enable single-pass reconstruction and real-time rendering. However, they typically adopt rigid pixel-to-Gaussian or voxel-to-Gaussian pipelines that uniformly allocate Gaussians, leading to redundant Gaussians across views. Moreover, they lack an effective mechanism to control the total number of Gaussians while maintaining reconstruction fidelity. To address these limitations, we present F4Splat, which performs Feed-Forward predictive densification for Feed-Forward 3D Gaussian Splatting, introducing a densification-score-guided allocation strategy that adaptively distributes Gaussians according to spatial complexity and multi-view overlap. Our model predicts per-region densification scores to estimate the required Gaussian density and allows explicit control over the final Gaussian budget without retraining. This spatially adaptive allocation reduces redundancy in simple regions and minimizes duplicate Gaussians across overlapping views, producing compact yet high-quality 3D representations. Extensive experiments demonstrate that our model achieves superior novel-view synthesis performance compared to prior uncalibrated feed-forward methods, while using significantly fewer Gaussians.

Community

Paper author · Paper submitter

Addressing a key limitation of feed-forward 3D Gaussian Splatting: flexible, non-uniform Gaussian allocation under a user-specified budget.

TL;DR: We present F⁴Splat, a Gaussian-count controllable feed-forward 3DGS framework. Instead of uniformly allocating Gaussians, F⁴Splat predicts densification scores to allocate more primitives where they are most needed, enabling compact yet high-fidelity 3D representations and achieving competitive or superior novel-view synthesis with significantly fewer Gaussians than prior uncalibrated feed-forward methods.
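The budgeted, score-guided allocation described above can be sketched as follows. This is a minimal illustration under assumed details, not the paper's implementation: the region granularity, the proportional allocation rule, and the `min_per_region` floor are all illustrative assumptions; in F⁴Splat the scores themselves are predicted by the network.

```python
import numpy as np

def allocate_gaussians(scores, budget, min_per_region=1):
    """Distribute a fixed Gaussian budget across regions in proportion to
    predicted densification scores (illustrative rule, not the paper's)."""
    scores = np.asarray(scores, dtype=float)
    n = len(scores)
    # Guarantee every region a minimum count, then split the rest by score.
    base = np.full(n, min_per_region, dtype=int)
    remainder = budget - base.sum()
    weights = scores / scores.sum()
    extra = np.floor(weights * remainder).astype(int)
    # Hand leftover Gaussians (from flooring) to the highest-scoring regions.
    leftover = remainder - extra.sum()
    top = np.argsort(-scores)[:leftover]
    extra[top] += 1
    return base + extra

counts = allocate_gaussians([0.1, 0.6, 0.3], budget=100)
print(counts)        # e.g. [10 60 30]
print(counts.sum())  # exactly matches the budget: 100
```

Because the budget enters only at this allocation step, the same trained score predictor can serve any user-specified Gaussian count, which is the "explicit control without retraining" property the abstract highlights.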

Amazing Work!!

