Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D Generation
November 21, 2024
Authors: Yuanhao Cai, He Zhang, Kai Zhang, Yixun Liang, Mengwei Ren, Fujun Luan, Qing Liu, Soo Ye Kim, Jianming Zhang, Zhifei Zhang, Yuqian Zhou, Zhe Lin, Alan Yuille
cs.AI
Abstract
Existing feed-forward image-to-3D methods mainly rely on 2D multi-view
diffusion models that cannot guarantee 3D consistency. These methods easily
collapse when changing the prompt view direction and mainly handle
object-centric prompt images. In this paper, we propose a novel single-stage 3D
diffusion model, DiffusionGS, for object and scene generation from a single
view. DiffusionGS directly outputs 3D Gaussian point clouds at each timestep to
enforce view consistency and allow the model to generate robustly given prompt
views of any direction, beyond object-centric inputs. Plus, to improve the
capability and generalization ability of DiffusionGS, we scale up 3D training
data by developing a scene-object mixed training strategy. Experiments show
that our method enjoys better generation quality (2.20 dB higher in PSNR and
23.25 lower in FID) and over 5x faster speed (~6s on an A100 GPU) than SOTA
methods. The user study and text-to-3D applications also reveal the practical
value of our method. Our project page at
https://caiyuanhao1998.github.io/project/DiffusionGS/ shows video and
interactive generation results.
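
The abstract's central mechanism is that the denoiser outputs a full set of 3D Gaussians at every timestep, so all novel views are rendered from one shared 3D representation instead of from independent 2D multi-view samples. The sketch below is a minimal, hypothetical illustration of that single-stage idea: the module name `GaussianDenoiser`, the 14-value-per-Gaussian parameterization, the number of Gaussians, and the simplified sampling schedule are all assumptions for illustration, not the authors' DiffusionGS implementation.

```python
# Hypothetical sketch of a single-stage 3D diffusion denoiser that predicts
# Gaussian point clouds at each timestep. All names, shapes, and the crude
# scheduler are illustrative assumptions, not the DiffusionGS code.
import torch
import torch.nn as nn

N_GAUSSIANS = 4096          # assumed number of Gaussians per sample
PARAMS_PER_GAUSSIAN = 14    # xyz(3) + scale(3) + rotation quat(4) + opacity(1) + rgb(3)

class GaussianDenoiser(nn.Module):
    """Toy stand-in for the denoiser: maps a noisy Gaussian set, a prompt-image
    embedding, and a timestep to a denoised Gaussian set."""
    def __init__(self, img_dim=512, hidden=256):
        super().__init__()
        self.time_embed = nn.Embedding(1000, hidden)
        self.img_proj = nn.Linear(img_dim, hidden)
        self.net = nn.Sequential(
            nn.Linear(PARAMS_PER_GAUSSIAN + 2 * hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, PARAMS_PER_GAUSSIAN),
        )

    def forward(self, noisy_gaussians, img_feat, t):
        # Broadcast the conditioning to every Gaussian, then predict the clean set.
        b, n, _ = noisy_gaussians.shape
        cond = torch.cat([self.time_embed(t), self.img_proj(img_feat)], dim=-1)
        cond = cond[:, None, :].expand(b, n, -1)
        return self.net(torch.cat([noisy_gaussians, cond], dim=-1))

@torch.no_grad()
def sample(denoiser, img_feat, steps=50):
    """Simplified denoising loop: every step yields a complete 3D Gaussian point
    cloud, which is what lets any camera be rendered consistently."""
    b = img_feat.shape[0]
    x = torch.randn(b, N_GAUSSIANS, PARAMS_PER_GAUSSIAN)
    for t in reversed(range(steps)):
        t_batch = torch.full((b,), t, dtype=torch.long)
        x0_pred = denoiser(x, img_feat, t_batch)          # denoised Gaussians at this step
        x = x0_pred + (t / steps) * torch.randn_like(x)   # crude re-noising for the next step
    return x0_pred  # final Gaussians; render with any splatting rasterizer

if __name__ == "__main__":
    model = GaussianDenoiser()
    prompt_feature = torch.randn(1, 512)  # placeholder for an encoded prompt image
    gaussians = sample(model, prompt_feature)
    print(gaussians.shape)  # torch.Size([1, 4096, 14])
```

Rendering the returned Gaussians from arbitrary cameras with a splatting rasterizer would give the view-consistent outputs the abstract describes; the actual model additionally conditions on the prompt view's camera and is trained with the scene-object mixed strategy mentioned above.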