VideoMaker: Zero-shot Customized Video Generation with the Inherent Force of Video Diffusion Models
December 27, 2024
Authors: Tao Wu, Yong Zhang, Xiaodong Cun, Zhongang Qi, Junfu Pu, Huanzhang Dou, Guangcong Zheng, Ying Shan, Xi Li
cs.AI
Abstract
Zero-shot customized video generation has gained significant attention due to
its substantial application potential. Existing methods rely on additional
models to extract and inject reference subject features, assuming that the
Video Diffusion Model (VDM) alone is insufficient for zero-shot customized
video generation. However, these methods often struggle to maintain consistent
subject appearance due to suboptimal feature extraction and injection
techniques. In this paper, we reveal that VDM inherently possesses the force to
extract and inject subject features. Departing from previous heuristic
approaches, we introduce a novel framework that leverages VDM's inherent force
to enable high-quality zero-shot customized video generation. Specifically, for
feature extraction, we directly input reference images into VDM and use its
intrinsic feature extraction process, which not only provides fine-grained
features but also significantly aligns with VDM's pre-trained knowledge. For
feature injection, we devise an innovative bidirectional interaction between
subject features and generated content through spatial self-attention within
VDM, ensuring that VDM has better subject fidelity while maintaining the
diversity of the generated video. Experiments on both customized human and
object video generation validate the effectiveness of our framework.
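To make the injection idea concrete, below is a minimal sketch of the kind of mechanism the abstract describes: reference-image tokens produced by the same backbone are concatenated with generated-frame tokens inside a spatial self-attention layer, so both token sets attend to each other (a bidirectional interaction) and no separate feature-extraction model is needed. This is an illustrative toy layer, not the authors' implementation; the class name `SharedSpatialSelfAttention` and all dimensions are hypothetical.

```python
import torch
from torch import nn


class SharedSpatialSelfAttention(nn.Module):
    """Toy spatial self-attention where reference-image tokens and
    generated-frame tokens attend to each other, so subject features are
    injected without an additional encoder (hypothetical sketch)."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, gen_tokens: torch.Tensor, ref_tokens: torch.Tensor):
        # gen_tokens: (B, N_gen, dim) tokens of one generated frame
        # ref_tokens: (B, N_ref, dim) tokens of the reference image,
        #             obtained by running it through the same backbone
        joint = torch.cat([gen_tokens, ref_tokens], dim=1)
        # Every token (generated and reference) attends over the joint set,
        # which is one simple way to realize a bidirectional interaction.
        out, _ = self.attn(joint, joint, joint)
        out = self.proj(out)
        gen_out = out[:, : gen_tokens.shape[1]]
        ref_out = out[:, gen_tokens.shape[1]:]
        return gen_out, ref_out


if __name__ == "__main__":
    # Random tensors stand in for VDM latent tokens.
    layer = SharedSpatialSelfAttention(dim=320)
    gen = torch.randn(2, 1024, 320)   # e.g. a 32x32 latent of one frame
    ref = torch.randn(2, 1024, 320)   # reference image through the same VDM
    gen_updated, ref_updated = layer(gen, ref)
    print(gen_updated.shape, ref_updated.shape)
```

The design choice sketched here is that the reference path reuses the pretrained spatial layers rather than a separate image encoder, which is consistent with the abstract's claim that the extracted features align with the VDM's pretrained knowledge.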