SplineGS: Robust Motion-Adaptive Spline for Real-Time Dynamic 3D Gaussians from Monocular Video

December 13, 2024
Authors: Jongmin Park, Minh-Quan Viet Bui, Juan Luis Gonzalez Bello, Jaeho Moon, Jihyong Oh, Munchurl Kim
cs.AI

Abstract

Synthesizing novel views from in-the-wild monocular videos is challenging due to scene dynamics and the lack of multi-view cues. To address this, we propose SplineGS, a COLMAP-free dynamic 3D Gaussian Splatting (3DGS) framework for high-quality reconstruction and fast rendering from monocular videos. At its core is a novel Motion-Adaptive Spline (MAS) method, which represents continuous dynamic 3D Gaussian trajectories using cubic Hermite splines with a small number of control points. For MAS, we introduce a Motion-Adaptive Control points Pruning (MACP) method to model the deformation of each dynamic 3D Gaussian across varying motions, progressively pruning control points while maintaining dynamic modeling integrity. Additionally, we present a joint optimization strategy for camera parameter estimation and 3D Gaussian attributes, leveraging photometric and geometric consistency. This eliminates the need for Structure-from-Motion preprocessing and enhances SplineGS's robustness in real-world conditions. Experiments show that SplineGS significantly outperforms state-of-the-art methods in novel view synthesis quality for dynamic scenes from monocular videos, achieving rendering speeds thousands of times faster.
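The Motion-Adaptive Spline idea can be pictured with a short sketch. The code below is not the authors' implementation: the learnable control-point array, the finite-difference tangent rule, and the uniform time parameterization are illustrative assumptions. It only shows how a cubic Hermite spline defined by a handful of control points yields a continuous 3D trajectory for a single Gaussian center.

# Minimal sketch (assumed formulation, not the paper's code): evaluate the
# trajectory of one dynamic 3D Gaussian center as a cubic Hermite spline
# over a small set of control points.
import numpy as np

def hermite_trajectory(control_points: np.ndarray, t: float) -> np.ndarray:
    """Evaluate a cubic Hermite spline at normalized time t in [0, 1].

    control_points: (N, 3) array of 3D control points, N >= 2
                    (learnable parameters in a full pipeline).
    Returns the interpolated 3D position at time t.
    """
    n = len(control_points)
    # Map global time t to a segment index k and a local parameter u in [0, 1].
    s = float(np.clip(t, 0.0, 1.0)) * (n - 1)
    k = min(int(np.floor(s)), n - 2)
    u = s - k

    # Finite-difference tangents (one common choice; the paper may differ).
    def tangent(i: int) -> np.ndarray:
        lo, hi = max(i - 1, 0), min(i + 1, n - 1)
        return (control_points[hi] - control_points[lo]) / (hi - lo)

    p0, p1 = control_points[k], control_points[k + 1]
    m0, m1 = tangent(k), tangent(k + 1)

    # Cubic Hermite basis functions.
    h00 = 2 * u**3 - 3 * u**2 + 1
    h10 = u**3 - 2 * u**2 + u
    h01 = -2 * u**3 + 3 * u**2
    h11 = u**3 - u**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1

# Example: a trajectory with 5 control points, queried at normalized time 0.37.
ctrl = np.random.rand(5, 3)
print(hermite_trajectory(ctrl, 0.37))

In the full method, MACP would additionally adapt the number of control points per Gaussian to its motion complexity by progressive pruning; this sketch keeps the control-point count fixed.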
