Generative Densification: Learning to Densify Gaussians for High-Fidelity Generalizable 3D Reconstruction

December 9, 2024
Authors: Seungtae Nam, Xiangyu Sun, Gyeongjin Kang, Younggeun Lee, Seungjun Oh, Eunbyung Park
cs.AI

Abstract

Generalized feed-forward Gaussian models have achieved significant progress in sparse-view 3D reconstruction by leveraging prior knowledge from large multi-view datasets. However, these models often struggle to represent high-frequency details due to the limited number of Gaussians. While the densification strategy used in per-scene 3D Gaussian splatting (3D-GS) optimization can be adapted to the feed-forward models, it may not be ideally suited for generalized scenarios. In this paper, we propose Generative Densification, an efficient and generalizable method to densify Gaussians generated by feed-forward models. Unlike the 3D-GS densification strategy, which iteratively splits and clones raw Gaussian parameters, our method up-samples feature representations from the feed-forward models and generates their corresponding fine Gaussians in a single forward pass, leveraging the embedded prior knowledge for enhanced generalization. Experimental results on both object-level and scene-level reconstruction tasks demonstrate that our method outperforms state-of-the-art approaches with comparable or smaller model sizes, achieving notable improvements in representing fine details.
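The abstract describes up-sampling feature representations from a feed-forward model and decoding the results into fine Gaussians in a single forward pass. Below is a minimal PyTorch sketch of that idea only; the module name GenerativeDensifier, the MLP design, the upsampling factor, and the 14-dimensional Gaussian parameterization (position offset, scale, rotation quaternion, opacity, color) are illustrative assumptions, not the authors' implementation.

```python
# Sketch of single-pass generative densification (assumptions noted above):
# coarse per-Gaussian features are expanded into several fine features,
# each of which is decoded into the parameters of a new Gaussian.
import torch
import torch.nn as nn


class GenerativeDensifier(nn.Module):
    def __init__(self, feat_dim: int = 128, upsample_factor: int = 4, gaussian_dim: int = 14):
        super().__init__()
        self.upsample_factor = upsample_factor
        # Expand each coarse feature into `upsample_factor` fine features.
        self.up = nn.Sequential(
            nn.Linear(feat_dim, feat_dim * upsample_factor),
            nn.GELU(),
        )
        # Decode each fine feature into Gaussian parameters, e.g.
        # 3 (position offset) + 3 (scale) + 4 (rotation) + 1 (opacity) + 3 (color) = 14.
        self.decode = nn.Linear(feat_dim, gaussian_dim)

    def forward(self, coarse_feats: torch.Tensor) -> torch.Tensor:
        # coarse_feats: [N, C] features produced by the feed-forward model.
        n, c = coarse_feats.shape
        fine_feats = self.up(coarse_feats).view(n * self.upsample_factor, c)
        # One forward pass yields all fine Gaussians; no iterative split/clone.
        return self.decode(fine_feats)  # [N * upsample_factor, gaussian_dim]


# Usage: densify 1,000 coarse Gaussians into 4,000 fine ones.
feats = torch.randn(1000, 128)
fine_gaussians = GenerativeDensifier()(feats)
print(fine_gaussians.shape)  # torch.Size([4000, 14])
```

Unlike 3D-GS split-and-clone, which operates on raw Gaussian parameters per scene, a learned decoder of this kind can carry prior knowledge from the training data, which is the generalization argument the abstract makes.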
