SplineGS: Robust Motion-Adaptive Spline for Real-Time Dynamic 3D Gaussians from Monocular Video
December 13, 2024
Authors: Jongmin Park, Minh-Quan Viet Bui, Juan Luis Gonzalez Bello, Jaeho Moon, Jihyong Oh, Munchurl Kim
cs.AI
Abstract
Synthesizing novel views from in-the-wild monocular videos is challenging due
to scene dynamics and the lack of multi-view cues. To address this, we propose
SplineGS, a COLMAP-free dynamic 3D Gaussian Splatting (3DGS) framework for
high-quality reconstruction and fast rendering from monocular videos. At its
core is a novel Motion-Adaptive Spline (MAS) method, which represents
continuous dynamic 3D Gaussian trajectories using cubic Hermite splines with a
small number of control points. For MAS, we introduce a Motion-Adaptive Control
points Pruning (MACP) method to model the deformation of each dynamic 3D
Gaussian across varying motions, progressively pruning control points while
maintaining dynamic modeling integrity. Additionally, we present a joint
optimization strategy for camera parameter estimation and 3D Gaussian
attributes, leveraging photometric and geometric consistency. This eliminates
the need for Structure-from-Motion preprocessing and enhances SplineGS's
robustness in real-world conditions. Experiments show that SplineGS
significantly outperforms state-of-the-art methods in novel view synthesis
quality for dynamic scenes from monocular videos, achieving thousands of times
faster rendering speeds.
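To illustrate the spline representation at the core of MAS, the sketch below evaluates a single cubic Hermite segment between two control points. This is a minimal, hypothetical example of the general technique, not the paper's implementation; the tangent choice (finite differences) and the 1D setup are assumptions for clarity, whereas SplineGS optimizes its control points per dynamic 3D Gaussian.

```python
import numpy as np

def cubic_hermite(p0, p1, m0, m1, t):
    """Evaluate a cubic Hermite segment at parameter t in [0, 1].

    p0, p1: endpoint positions (e.g. a Gaussian center at segment ends)
    m0, m1: endpoint tangents
    """
    t = np.asarray(t, dtype=float)
    h00 = 2 * t**3 - 3 * t**2 + 1   # Hermite basis functions
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1

# Hypothetical trajectory from a few 1D control points; tangents via
# finite differences (one possible choice, not the paper's).
ctrl = np.array([0.0, 1.0, 0.5, 2.0])
tangents = np.gradient(ctrl)
# Position halfway between control points 1 and 2:
x = cubic_hermite(ctrl[1], ctrl[2], tangents[1], tangents[2], 0.5)
```

Because the curve interpolates its control points exactly (at t=0 it returns p0, at t=1 it returns p1) and is C1-continuous across segments, pruning a control point, as in MACP, smoothly simplifies the trajectory without introducing jumps.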