AuraFusion360: Augmented Unseen Region Alignment for Reference-based 360° Unbounded Scene Inpainting
February 7, 2025
Authors: Chung-Ho Wu, Yang-Jung Chen, Ying-Huan Chen, Jie-Ying Lee, Bo-Hsu Ke, Chun-Wei Tuan Mu, Yi-Chuan Huang, Chin-Yang Lin, Min-Hung Chen, Yen-Yu Lin, Yu-Lun Liu
cs.AI
Abstract
Three-dimensional scene inpainting is crucial for applications from virtual reality to architectural visualization, yet existing methods struggle with view consistency and geometric accuracy in 360° unbounded scenes. We present AuraFusion360, a novel reference-based method that enables high-quality object removal and hole filling in 3D scenes represented by Gaussian Splatting. Our approach introduces (1) depth-aware unseen mask generation for accurate occlusion identification, (2) Adaptive Guided Depth Diffusion, a zero-shot method for accurate initial point placement without requiring additional training, and (3) SDEdit-based detail enhancement for multi-view coherence. We also introduce 360-USID, the first comprehensive dataset for 360° unbounded scene inpainting with ground truth. Extensive experiments demonstrate that AuraFusion360 significantly outperforms existing methods, achieving superior perceptual quality while maintaining geometric accuracy across dramatic viewpoint changes. See our project page for video results and the dataset: https://kkennethwu.github.io/aurafusion360/.
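
To make component (1) concrete: a depth-aware unseen mask can be sketched by comparing the depth rendered after the object's Gaussians are removed against an independent background depth estimate for the same view. Pixels inside the object mask where the two depths disagree were never observed by any training view and must be inpainted. The function below is a minimal illustrative sketch, not the paper's implementation; the function name, inputs, and tolerance value are assumptions.

```python
import numpy as np

def unseen_mask(object_mask, depth_removed, depth_ref, rel_tol=0.05):
    """Sketch of a depth-aware unseen-mask test for one view.

    object_mask   : (H, W) bool, 2D mask of the removed object in this view.
    depth_removed : (H, W) float, depth rendered from the scene after the
                    object's Gaussians were deleted.
    depth_ref     : (H, W) float, reference background depth for the same
                    view (e.g. a monocular estimate aligned to scene scale).

    Returns a bool mask of pixels inside the object region whose
    post-removal depth disagrees with the reference beyond rel_tol,
    i.e. regions no training view ever observed.
    """
    rel_err = np.abs(depth_removed - depth_ref) / np.maximum(depth_ref, 1e-6)
    return object_mask & (rel_err > rel_tol)
```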
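Component (3), SDEdit-based detail enhancement, follows the standard img2img denoising scheme: partially noise a coarse render and denoise it back, so that a low `strength` preserves the multi-view-consistent structure while restoring texture detail. Below is a minimal sketch using the Hugging Face diffusers library; the model choice, prompt, and file paths are placeholders, not the paper's actual pipeline.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# A generic img2img pipeline; SDEdit refinement amounts to partially
# noising the input image and denoising it back.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder model choice
    torch_dtype=torch.float16,
).to("cuda")

coarse = Image.open("coarse_render.png").convert("RGB")  # hypothetical path

# strength controls how much noise is injected: low values keep the
# geometry-consistent layout of the render, high values add detail
# but risk drifting from the other views.
refined = pipe(
    prompt="a photo of the scene background",  # placeholder prompt
    image=coarse,
    strength=0.4,
    guidance_scale=7.5,
).images[0]
refined.save("refined_render.png")
```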