4Real-Video: Learning Generalizable Photo-Realistic 4D Video Diffusion
December 5, 2024
Authors: Chaoyang Wang, Peiye Zhuang, Tuan Duc Ngo, Willi Menapace, Aliaksandr Siarohin, Michael Vasilkovsky, Ivan Skorokhodov, Sergey Tulyakov, Peter Wonka, Hsin-Ying Lee
cs.AI
Abstract
We propose 4Real-Video, a novel framework for generating 4D videos, organized
as a grid of video frames with both time and viewpoint axes. In this grid, each
row contains frames sharing the same timestep, while each column contains
frames from the same viewpoint. We propose a novel two-stream architecture. One
stream performs viewpoint updates on columns, and the other stream performs
temporal updates on rows. After each diffusion transformer layer, a
synchronization layer exchanges information between the two token streams. We
propose two implementations of the synchronization layer, using either hard or
soft synchronization. This feedforward architecture improves upon previous work
in three ways: higher inference speed, enhanced visual quality (measured by
FVD, CLIP, and VideoScore), and improved temporal and viewpoint consistency
(measured by VideoScore and Dust3R-Confidence).
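To make the two-stream design concrete, the following is a minimal PyTorch sketch of one layer pair with a synchronization step. It is not the authors' released code: the tensor layout `[B, V, T, N, D]`, the module name `TwoStreamBlock`, the token averaging used for hard sync, and the learned gate used for soft sync are all assumptions made for illustration.

```python
# Minimal sketch of a two-stream diffusion-transformer layer with a
# synchronization step. All shapes and sync formulas are assumptions.
import torch
import torch.nn as nn

class TwoStreamBlock(nn.Module):
    """One viewpoint/temporal layer pair followed by a sync layer.

    Tokens form a grid: x has shape [B, V, T, N, D], where V = number of
    viewpoints, T = number of timesteps, N = tokens per frame, D = channels.
    """
    def __init__(self, dim: int, heads: int = 8, hard_sync: bool = True):
        super().__init__()
        self.view_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.time_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.hard_sync = hard_sync
        # Soft sync (assumed form): a learned per-channel gate mixing streams.
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, x_view, x_time):
        B, V, T, N, D = x_view.shape
        # Viewpoint stream: attend across viewpoints for each timestep,
        # i.e., over the frames of one row of the grid.
        v = x_view.permute(0, 2, 1, 3, 4).reshape(B * T, V * N, D)
        v = v + self.view_attn(v, v, v, need_weights=False)[0]
        x_view = v.reshape(B, T, V, N, D).permute(0, 2, 1, 3, 4)
        # Temporal stream: attend across timesteps for each viewpoint,
        # i.e., along one column of the grid.
        t = x_time.reshape(B * V, T * N, D)
        t = t + self.time_attn(t, t, t, need_weights=False)[0]
        x_time = t.reshape(B, V, T, N, D)
        # Synchronization layer: exchange information between streams.
        if self.hard_sync:
            # Hard sync: both streams continue from the same fused tokens.
            fused = 0.5 * (x_view + x_time)
            return fused, fused
        # Soft sync: each stream keeps its own tokens and receives a
        # gated residual from the other stream.
        g = torch.sigmoid(self.gate(torch.cat([x_view, x_time], dim=-1)))
        return x_view + g * (x_time - x_view), x_time + g * (x_view - x_time)

if __name__ == "__main__":
    block = TwoStreamBlock(dim=64, hard_sync=False)
    grid = torch.randn(2, 4, 8, 16, 64)  # 4 viewpoints, 8 timesteps
    xv, xt = block(grid, grid)
    print(xv.shape, xt.shape)  # torch.Size([2, 4, 8, 16, 64]) twice
```

Under these assumptions, hard synchronization forces both streams to continue from identical fused tokens after every layer, while the soft variant lets each stream keep its own representation and exchange only a gated residual, trading strict cross-stream agreement for flexibility.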