VideoAnydoor: High-fidelity Video Object Insertion with Precise Motion Control
January 2, 2025
Authors: Yuanpeng Tu, Hao Luo, Xi Chen, Sihui Ji, Xiang Bai, Hengshuang Zhao
cs.AI
Abstract
Despite significant advancements in video generation, inserting a given
object into videos remains a challenging task. The difficulty lies in
preserving the appearance details of the reference object and accurately
modeling coherent motions at the same time. In this paper, we propose
VideoAnydoor, a zero-shot video object insertion framework with high-fidelity
detail preservation and precise motion control. Starting from a text-to-video
model, we utilize an ID extractor to inject the global identity and leverage a
box sequence to control the overall motion. To preserve the detailed appearance
and meanwhile support fine-grained motion control, we design a pixel warper. It
takes the reference image with arbitrary key-points and the corresponding
key-point trajectories as inputs. It warps the pixel details according to the
trajectories and fuses the warped features with the diffusion U-Net, thus
improving detail preservation and supporting users in manipulating the motion
trajectories. In addition, we propose a training strategy involving both videos
and static images with a reweighted reconstruction loss to enhance insertion
quality. VideoAnydoor demonstrates significant superiority over existing
methods and naturally supports various downstream applications (e.g., talking
head generation, video virtual try-on, multi-region editing) without
task-specific fine-tuning.
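The abstract describes the pixel warper only at a high level: it relocates reference-image details to the per-frame positions given by user key-point trajectories before fusion with the diffusion U-Net. A minimal sketch of that warping idea is below; the function name, array layouts, and the simple scatter scheme are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def warp_reference_features(ref_feat, keypoints, trajectories):
    """Scatter reference features located at key-points to their per-frame
    trajectory positions, producing one warped feature map per frame.

    ref_feat:     (H, W, C) feature map of the reference image
    keypoints:    (K, 2) integer (y, x) positions on the reference image
    trajectories: (T, K, 2) integer (y, x) target positions per frame
    Returns:      (T, H, W, C) warped feature maps (zeros elsewhere)

    A real pixel warper would operate on learned features with sub-pixel
    sampling; this sketch only shows the trajectory-driven relocation.
    """
    H, W, C = ref_feat.shape
    T, K, _ = trajectories.shape
    warped = np.zeros((T, H, W, C), dtype=ref_feat.dtype)
    for t in range(T):
        for k in range(K):
            sy, sx = keypoints[k]          # source position on the reference
            ty, tx = trajectories[t, k]    # target position in frame t
            if 0 <= ty < H and 0 <= tx < W:
                warped[t, ty, tx] = ref_feat[sy, sx]
    return warped
```

In the full system, the warped features would then be fused with the U-Net's intermediate activations, so that moving a trajectory moves the corresponding appearance detail.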
AI-Generated Summary
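The reweighted reconstruction loss is likewise only named, not specified. One common way to realize such a loss is to upweight the pixels inside the insertion region so the model prioritizes the inserted object; the sketch below assumes that reading, and the weight values are placeholders rather than the paper's settings.

```python
import numpy as np

def reweighted_recon_loss(pred, target, mask, w_inside=2.0, w_outside=1.0):
    """Pixel-wise MSE with a higher weight inside the insertion mask.

    pred, target: (H, W, C) arrays
    mask:         (H, W) array in {0, 1}, where 1 marks the object region
    The weighting scheme and values are illustrative assumptions.
    """
    w = np.where(mask[..., None] > 0, w_inside, w_outside)
    return float(np.mean(w * (pred - target) ** 2))
```

With `w_inside > w_outside`, reconstruction errors on the inserted object cost more than errors on the unchanged background, which matches the abstract's stated goal of enhancing insertion quality.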