VideoAnydoor: High-fidelity Video Object Insertion with Precise Motion Control
January 2, 2025
Authors: Yuanpeng Tu, Hao Luo, Xi Chen, Sihui Ji, Xiang Bai, Hengshuang Zhao
cs.AI
Abstract
Despite significant advancements in video generation, inserting a given
object into videos remains a challenging task. The difficulty lies in
preserving the appearance details of the reference object and accurately
modeling coherent motions at the same time. In this paper, we propose
VideoAnydoor, a zero-shot video object insertion framework with high-fidelity
detail preservation and precise motion control. Starting from a text-to-video
model, we utilize an ID extractor to inject the global identity and leverage a
box sequence to control the overall motion. To preserve the detailed appearance
and meanwhile support fine-grained motion control, we design a pixel warper. It
takes the reference image with arbitrary key-points and the corresponding
key-point trajectories as inputs. It warps the pixel details according to the
trajectories and fuses the warped features with the diffusion U-Net, thus
improving detail preservation and supporting users in manipulating the motion
trajectories. In addition, we propose a training strategy involving both videos
and static images with a reweighted reconstruction loss to enhance insertion
quality. VideoAnydoor demonstrates significant superiority over existing
methods and naturally supports various downstream applications (e.g., talking
head generation, video virtual try-on, multi-region editing) without
task-specific fine-tuning.
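The core of the pixel warper is the idea of moving reference-image features to new positions dictated by key-point trajectories, then fusing the warped features with the diffusion U-Net's features. The sketch below illustrates that warping-and-fusing idea in a minimal form; the function names, the nearest-pixel scatter, and the additive fusion are illustrative assumptions, not the paper's actual (learned) implementation.

```python
import numpy as np

def warp_reference_features(ref_feat, keypoints, trajectories):
    """Hypothetical sketch of the pixel-warping step.

    ref_feat:     (H, W, C) feature map of the reference object
    keypoints:    (K, 2) integer (y, x) source positions in ref_feat
    trajectories: (T, K, 2) integer (y, x) target positions per frame
    returns:      (T, H, W, C) sparse warped feature maps, one per frame
    """
    T = trajectories.shape[0]
    H, W, C = ref_feat.shape
    warped = np.zeros((T, H, W, C), dtype=ref_feat.dtype)
    for t in range(T):
        for k, (sy, sx) in enumerate(keypoints):
            ty, tx = trajectories[t, k]
            # Move the feature at each key-point to its position in frame t.
            if 0 <= ty < H and 0 <= tx < W:
                warped[t, ty, tx] = ref_feat[sy, sx]
    return warped

def fuse(unet_feat, warped, alpha=0.5):
    # Simple additive fusion for illustration; in the actual framework
    # the fusion with the diffusion U-Net is learned, not a fixed sum.
    return unet_feat + alpha * warped
```

By editing `trajectories`, a user controls where the object's details end up in each frame, which is how trajectory manipulation translates into motion control.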