Efficient Gaussian Splatting for Monocular Dynamic Scene Rendering via Sparse Time-Variant Attribute Modeling
February 27, 2025
Authors: Hanyang Kong, Xingyi Yang, Xinchao Wang
cs.AI
Abstract
Rendering dynamic scenes from monocular videos is a crucial yet challenging
task. The recent deformable Gaussian Splatting has emerged as a robust solution
to represent real-world dynamic scenes. However, it often produces heavily
redundant Gaussians that attempt to fit every training view at various time
steps, which slows rendering. Additionally, the attributes of Gaussians in
static areas are time-invariant, so modeling time-variant attributes for every
Gaussian is unnecessary and can cause jittering in static regions. In practice, the
primary bottleneck in rendering speed for dynamic scenes is the number of
Gaussians. In response, we introduce Efficient Dynamic Gaussian Splatting
(EDGS), which represents dynamic scenes via sparse time-variant attribute
modeling. Our approach formulates dynamic scenes using a sparse anchor-grid
representation, with the motion flow of dense Gaussians calculated via a
classical kernel representation. Furthermore, we propose an unsupervised
strategy to efficiently filter out anchors corresponding to static areas. Only
anchors associated with deformable objects are input into MLPs to query
time-variant attributes. Experiments on two real-world datasets demonstrate
that our EDGS significantly improves the rendering speed with superior
rendering quality compared to previous state-of-the-art methods.
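
As a concrete illustration of the pipeline described in the abstract, below is a minimal, hypothetical PyTorch sketch of sparse time-variant attribute modeling. It is not the authors' implementation: the names (`SparseDeformField`, `deform_mlp`, `dynamic_mask`, `rbf_kernel`), the choice of an RBF as the "classical kernel representation", and the precomputed boolean mask standing in for the unsupervised static-anchor filter are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

def rbf_kernel(dists, bandwidth=0.1):
    # Classical RBF kernel: closer anchors receive larger weights.
    # (The paper's actual kernel choice is an assumption here.)
    return torch.exp(-dists ** 2 / (2 * bandwidth ** 2))

class SparseDeformField(nn.Module):
    def __init__(self, num_anchors, feat_dim=32):
        super().__init__()
        # Sparse anchor grid: a position and a learnable feature per anchor.
        self.anchor_xyz = nn.Parameter(torch.rand(num_anchors, 3))
        self.anchor_feat = nn.Parameter(torch.zeros(num_anchors, feat_dim))
        # MLP queried only for dynamic anchors: (feature, time) -> motion offset.
        self.deform_mlp = nn.Sequential(
            nn.Linear(feat_dim + 1, 64), nn.ReLU(),
            nn.Linear(64, 3),
        )
        # Boolean static/dynamic mask; a stand-in for the paper's unsupervised
        # static-anchor filtering (the actual criterion is an assumption).
        self.register_buffer("dynamic_mask",
                             torch.ones(num_anchors, dtype=torch.bool))

    def forward(self, gaussian_xyz, t):
        # Query time-variant offsets only for anchors marked dynamic;
        # static anchors keep a zero offset and never touch the MLP.
        offsets = torch.zeros_like(self.anchor_xyz)
        dyn = self.dynamic_mask
        if dyn.any():
            t_col = torch.full((int(dyn.sum()), 1), float(t),
                               device=self.anchor_xyz.device)
            offsets[dyn] = self.deform_mlp(
                torch.cat([self.anchor_feat[dyn], t_col], dim=-1))
        # Kernel regression: each dense Gaussian's motion is a normalized
        # kernel-weighted average of nearby anchor offsets.
        w = rbf_kernel(torch.cdist(gaussian_xyz, self.anchor_xyz))  # (G, A)
        w = w / (w.sum(dim=-1, keepdim=True) + 1e-8)
        return gaussian_xyz + w @ offsets  # deformed Gaussian centers at time t
```

For instance, `field = SparseDeformField(num_anchors=4096)` followed by `xyz_t = field(gaussian_xyz, t=0.5)` would deform a (G, 3) tensor of Gaussian centers to time t. Because static anchors bypass the MLP entirely, the per-frame deformation cost scales with the number of dynamic anchors rather than with the number of dense Gaussians, which is the efficiency argument the abstract makes.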