Video Creation by Demonstration

December 12, 2024
Authors: Yihong Sun, Hao Zhou, Liangzhe Yuan, Jennifer J. Sun, Yandong Li, Xuhui Jia, Hartwig Adam, Bharath Hariharan, Long Zhao, Ting Liu
cs.AI

Abstract

We explore a novel video creation experience, namely Video Creation by Demonstration. Given a demonstration video and a context image from a different scene, we generate a physically plausible video that continues naturally from the context image and carries out the action concepts from the demonstration. To enable this capability, we present delta-Diffusion, a self-supervised training approach that learns from unlabeled videos by conditional future frame prediction. Unlike most existing video generation controls that are based on explicit signals, we adopt the form of implicit latent control for the maximal flexibility and expressiveness required by general videos. By leveraging a video foundation model with an appearance bottleneck design on top, we extract action latents from demonstration videos for conditioning the generation process with minimal appearance leakage. Empirically, delta-Diffusion outperforms related baselines in terms of both human preference and large-scale machine evaluations, and demonstrates potential for interactive world simulation. Sampled video generation results are available at https://delta-diffusion.github.io/.
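The abstract describes conditioning a future-frame predictor on an action latent extracted from the demonstration video through an appearance bottleneck. The sketch below is only a structural illustration of that idea, not the authors' implementation: every module name, shape, and the simplified noising schedule (e.g. ActionEncoder, ConditionalDenoiser, latent_dim=256) is a hypothetical stand-in.

```python
# Minimal structural sketch of "action latent through an appearance bottleneck"
# conditioning a future-frame predictor. All names/shapes are hypothetical.
import torch
import torch.nn as nn


class ActionEncoder(nn.Module):
    """Maps demonstration-video features to a low-dimensional action latent.

    The narrow bottleneck stands in for the appearance-bottleneck idea: limited
    capacity makes appearance details hard to pass through, while temporal
    pooling keeps motion/action information.
    """

    def __init__(self, feat_dim: int = 1024, latent_dim: int = 256):
        super().__init__()
        self.bottleneck = nn.Sequential(
            nn.Linear(feat_dim, latent_dim),  # compress per-frame features
            nn.GELU(),
            nn.Linear(latent_dim, latent_dim),
        )

    def forward(self, demo_features: torch.Tensor) -> torch.Tensor:
        # demo_features: (B, T, feat_dim) frame features from a frozen video
        # foundation model, treated here as a black box.
        z = self.bottleneck(demo_features)   # (B, T, latent_dim)
        return z.mean(dim=1)                 # temporal pooling -> (B, latent_dim)


class ConditionalDenoiser(nn.Module):
    """Toy denoiser for conditional future-frame prediction.

    Predicts the noise on the future frame, conditioned on the context image
    and the action latent. A real model would be a video diffusion backbone.
    """

    def __init__(self, frame_dim: int = 3 * 64 * 64, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(frame_dim * 2 + latent_dim + 1, 1024),
            nn.GELU(),
            nn.Linear(1024, frame_dim),
        )

    def forward(self, noisy_future, context_frame, action_latent, t):
        x = torch.cat([noisy_future, context_frame, action_latent, t], dim=-1)
        return self.net(x)


def training_step(encoder, denoiser, demo_features, context_frame, future_frame):
    """One self-supervised step: denoise the future frame given the context
    frame and the action latent extracted from the same demonstration clip."""
    action_latent = encoder(demo_features)
    t = torch.rand(future_frame.size(0), 1)            # noise level in [0, 1)
    noise = torch.randn_like(future_frame)
    noisy_future = (1 - t) * future_frame + t * noise   # simplified schedule
    pred_noise = denoiser(noisy_future, context_frame, action_latent, t)
    return nn.functional.mse_loss(pred_noise, noise)


if __name__ == "__main__":
    enc, den = ActionEncoder(), ConditionalDenoiser()
    demo = torch.randn(2, 16, 1024)     # 16 frames of foundation-model features
    ctx = torch.randn(2, 3 * 64 * 64)   # flattened context image
    fut = torch.randn(2, 3 * 64 * 64)   # flattened target future frame
    loss = training_step(enc, den, demo, ctx, fut)
    loss.backward()
    print(float(loss))
```

At inference time, the same encoder would be applied to a demonstration from a different scene, and the resulting action latent would condition generation from a new context image, which is the cross-scene transfer the paper targets.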
