Generative Densification: Learning to Densify Gaussians for High-Fidelity Generalizable 3D Reconstruction
December 9, 2024
Authors: Seungtae Nam, Xiangyu Sun, Gyeongjin Kang, Younggeun Lee, Seungjun Oh, Eunbyung Park
cs.AI
Abstract
Generalized feed-forward Gaussian models have achieved significant progress
in sparse-view 3D reconstruction by leveraging prior knowledge from large
multi-view datasets. However, these models often struggle to represent
high-frequency details due to the limited number of Gaussians. While the
densification strategy used in per-scene 3D Gaussian splatting (3D-GS)
optimization can be adapted to the feed-forward models, it may not be ideally
suited for generalized scenarios. In this paper, we propose Generative
Densification, an efficient and generalizable method to densify Gaussians
generated by feed-forward models. Unlike the 3D-GS densification strategy,
which iteratively splits and clones raw Gaussian parameters, our method
up-samples feature representations from the feed-forward models and generates
their corresponding fine Gaussians in a single forward pass, leveraging the
embedded prior knowledge for enhanced generalization. Experimental results on
both object-level and scene-level reconstruction tasks demonstrate that our
method outperforms state-of-the-art approaches with comparable or smaller model
sizes, achieving notable improvements in representing fine details.
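
To make the single-pass mechanism concrete, below is a minimal PyTorch sketch of the idea the abstract describes: per-Gaussian feature representations from a feed-forward model are up-sampled into several child features, each decoded into a fine Gaussian, all in one forward pass. The module name, the K-way linear up-sampler, the 14-parameter Gaussian layout, and all shapes are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of generative densification (not the authors' code).
# A feed-forward model yields N coarse Gaussians with per-point features;
# each feature is up-sampled into K child features, and each child is
# decoded into fine Gaussian parameters in a single forward pass.
import torch
import torch.nn as nn

class GenerativeDensifier(nn.Module):
    def __init__(self, feat_dim=64, k_children=4, gauss_dim=14):
        # gauss_dim (assumed): 3 (position offset) + 3 (scale) + 4 (rotation)
        #                      + 3 (color) + 1 (opacity) = 14 params/Gaussian
        super().__init__()
        self.k = k_children
        # Learned up-sampler: one coarse feature -> K child features.
        self.upsampler = nn.Sequential(
            nn.Linear(feat_dim, feat_dim * k_children),
            nn.GELU(),
        )
        # Decoder head mapping each child feature to fine Gaussian params.
        self.decoder = nn.Linear(feat_dim, gauss_dim)

    def forward(self, coarse_xyz, coarse_feat):
        # coarse_xyz:  (N, 3) centers from the feed-forward model
        # coarse_feat: (N, F) per-Gaussian feature representations
        n, f = coarse_feat.shape
        child_feat = self.upsampler(coarse_feat).view(n * self.k, f)
        params = self.decoder(child_feat)               # (N*K, gauss_dim)
        # Place each child as an offset around its parent's center.
        offsets = params[:, :3]
        parent_xyz = coarse_xyz.repeat_interleave(self.k, dim=0)
        fine_xyz = parent_xyz + offsets
        return fine_xyz, params[:, 3:]                  # centers, attributes

# Usage: densify 1024 coarse Gaussians into 4096 fine ones in one pass.
densifier = GenerativeDensifier()
xyz, feat = torch.randn(1024, 3), torch.randn(1024, 64)
fine_xyz, fine_attr = densifier(xyz, feat)
print(fine_xyz.shape, fine_attr.shape)  # (4096, 3), (4096, 11)
```

Note the contrast with 3D-GS densification: instead of iteratively splitting or cloning raw Gaussian parameters during per-scene optimization, the up-sampler and decoder here are trained across scenes, so the learned prior decides where and how to add fine Gaussians in a single feed-forward step.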