Novel View Extrapolation with Video Diffusion Priors
November 21, 2024
Authors: Kunhao Liu, Ling Shao, Shijian Lu
cs.AI
Abstract
The field of novel view synthesis has made significant strides thanks to the
development of radiance field methods. However, most radiance field techniques
are far better at novel view interpolation than at novel view extrapolation, where
the synthesized novel views lie far beyond the observed training views. We design
ViewExtrapolator, a novel view synthesis approach that leverages the generative
priors of Stable Video Diffusion (SVD) for realistic novel view extrapolation.
By redesigning the SVD denoising process, ViewExtrapolator refines the
artifact-prone views rendered by radiance fields, greatly enhancing the clarity
and realism of the synthesized novel views. ViewExtrapolator is a generic novel
view extrapolator that can work with different types of 3D rendering such as
views rendered from point clouds when only a single view or monocular video is
available. Additionally, ViewExtrapolator requires no fine-tuning of SVD,
making it both data-efficient and computation-efficient. Extensive experiments
demonstrate the superiority of ViewExtrapolator in novel view extrapolation.
Project page: https://kunhao-liu.github.io/ViewExtrapolator/.
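The abstract describes refining artifact-prone radiance-field renders by repurposing SVD's denoising process, without fine-tuning the model. As a rough intuition only, the sketch below shows an SDEdit-style refinement loop in NumPy on a 1D toy signal: perturb the imperfect render with noise up to an intermediate level, then run a denoising chain back down, so scene content is preserved while high-frequency artifacts are suppressed. The moving-average filter stands in for SVD's learned denoiser, and all function names, the noise schedule, and the step counts are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_denoise_step(x):
    # Stand-in for one denoising step: a simple moving-average
    # filter that suppresses high-frequency noise. ViewExtrapolator
    # would use Stable Video Diffusion's learned denoiser here.
    kernel = np.ones(5) / 5.0
    return np.convolve(x, kernel, mode="same")

def refine_render(render, t_start=0.3, n_steps=5):
    """SDEdit-style refinement of an artifact-prone render:
    add noise up to an intermediate level t_start, then run the
    denoising chain back down. Starting from the render (rather
    than pure noise) keeps the scene content intact."""
    x = render + t_start * rng.standard_normal(render.shape)
    for _ in range(n_steps):
        x = toy_denoise_step(x)
    return x

# 1D toy "image": a flat signal corrupted by rendering artifacts.
clean = np.ones(64)
render = clean + 0.5 * rng.standard_normal(64)
refined = refine_render(render)
```

In this toy setting the refined signal sits measurably closer to the clean one than the raw render does (away from the boundary, where zero-padding from `mode="same"` distorts the filter); the paper's actual contribution lies in how the real SVD denoising trajectory is redesigned so the pretrained video prior plays this role without any fine-tuning.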