AnimateAnything: Consistent and Controllable Animation for Video Generation
November 16, 2024
Authors: Guojun Lei, Chi Wang, Hong Li, Rong Zhang, Yikai Wang, Weiwei Xu
cs.AI
Abstract
We present AnimateAnything, a unified controllable video generation approach that enables precise and consistent video manipulation under various conditions, including camera trajectories, text prompts, and user motion annotations. Specifically, we design a multi-scale control feature fusion network that constructs a common motion representation for the different conditions, explicitly converting all control information into frame-by-frame optical flows. We then incorporate these optical flows as motion priors to guide the final video generation. In addition, to reduce the flickering caused by large-scale motion, we propose a frequency-based stabilization module that enhances temporal coherence by enforcing consistency in the video's frequency domain. Experiments demonstrate that our method outperforms state-of-the-art approaches. For more details and videos, please refer to the project webpage: https://yu-shaonian.github.io/Animate_Anything/.
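To make the optical-flow motion prior concrete, here is a minimal sketch of how a per-frame flow field can transport a reference frame forward in time. This is not the paper's code: the function name, tensor shapes, and the use of bilinear grid sampling are illustrative assumptions.

```python
# Illustrative sketch (not the paper's implementation): warping a frame
# by a per-frame optical flow field, the basic operation behind using
# flow as a motion prior.
import torch
import torch.nn.functional as F

def warp_with_flow(frame: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp a frame (B, C, H, W) by a flow field (B, 2, H, W) in pixels.

    Shapes and pixel-space flow convention are assumptions for this sketch.
    """
    b, _, h, w = frame.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, device=frame.device, dtype=frame.dtype),
        torch.arange(w, device=frame.device, dtype=frame.dtype),
        indexing="ij",
    )
    grid_x = xs.unsqueeze(0) + flow[:, 0]  # displaced x coordinates
    grid_y = ys.unsqueeze(0) + flow[:, 1]  # displaced y coordinates
    # Normalize to [-1, 1], the coordinate range grid_sample expects.
    grid = torch.stack(
        (2.0 * grid_x / (w - 1) - 1.0, 2.0 * grid_y / (h - 1) - 1.0), dim=-1
    )
    return F.grid_sample(frame, grid, align_corners=True)
```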
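The frequency-based stabilization idea can likewise be sketched with a temporal FFT: flicker shows up as high-frequency energy along the time axis, so attenuating it improves temporal coherence. The function name, the `keep_ratio` knob, and the 0.5 attenuation factor below are hypothetical choices, not details from the paper.

```python
# Minimal sketch of frequency-domain temporal stabilization, assuming a
# (T, C, H, W) video tensor. Not the paper's module.
import torch

def stabilize_frequency(frames: torch.Tensor, keep_ratio: float = 0.25) -> torch.Tensor:
    """Suppress high temporal frequencies that manifest as flicker.

    keep_ratio: fraction of low temporal frequencies left untouched
                (hypothetical parameter for this sketch).
    """
    # FFT along the time axis; flicker concentrates in high frequencies here.
    spec = torch.fft.rfft(frames, dim=0)
    t = spec.shape[0]
    cutoff = max(1, int(t * keep_ratio))
    # Attenuate (rather than zero out) high temporal frequencies so that
    # genuine large-scale motion is preserved.
    scale = torch.ones(t, device=frames.device)
    scale[cutoff:] = 0.5
    spec = spec * scale.view(-1, 1, 1, 1)
    return torch.fft.irfft(spec, n=frames.shape[0], dim=0)

# Usage: stabilized = stabilize_frequency(video, keep_ratio=0.25)
```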