

VideoGrain: Modulating Space-Time Attention for Multi-grained Video Editing

February 24, 2025
Authors: Xiangpeng Yang, Linchao Zhu, Hehe Fan, Yi Yang
cs.AI

Abstract

Recent advancements in diffusion models have significantly improved video generation and editing capabilities. However, multi-grained video editing, which encompasses class-level, instance-level, and part-level modifications, remains a formidable challenge. The major difficulties in multi-grained editing include semantic misalignment of text-to-region control and feature coupling within the diffusion model. To address these difficulties, we present VideoGrain, a zero-shot approach that modulates space-time (cross- and self-) attention mechanisms to achieve fine-grained control over video content. We enhance text-to-region control by amplifying each local prompt's attention to its corresponding spatially disentangled region while minimizing interactions with irrelevant areas in cross-attention. Additionally, we improve feature separation by increasing intra-region awareness and reducing inter-region interference in self-attention. Extensive experiments demonstrate that our method achieves state-of-the-art performance in real-world scenarios. Our code, data, and demos are available at https://knightyxp.github.io/VideoGrain_project_page/.
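
The abstract describes the attention modulation only at a high level. As a rough illustration, here is a minimal PyTorch sketch of what region-masked modulation of cross- and self-attention logits could look like; the function names, tensor layouts, additive-bias scheme, and bias magnitudes are all assumptions made for this sketch, not VideoGrain's published implementation.

```python
# Hypothetical sketch of region-masked attention modulation, written only from
# the abstract's description. Names, shapes, and bias values are illustrative
# assumptions, not the authors' actual code.
import torch


def modulate_cross_attention(scores, region_masks, token_groups,
                             amp=5.0, sup=-5.0):
    """scores: (heads, Q, T) cross-attention logits over T text tokens.
    region_masks: (R, Q) binary masks, one per spatially disentangled region.
    token_groups: length-R list; entry r holds the text-token indices of
    local prompt r."""
    bias = torch.zeros_like(scores)
    for r, tokens in enumerate(token_groups):
        inside = region_masks[r].bool()   # query positions inside region r
        for t in tokens:
            bias[:, inside, t] += amp     # amplify prompt-r tokens in region r
            bias[:, ~inside, t] += sup    # suppress them in irrelevant areas
    return torch.softmax(scores + bias, dim=-1)


def modulate_self_attention(scores, region_masks, sup=-5.0):
    """scores: (heads, Q, Q) self-attention logits over Q spatial positions."""
    masks = region_masks.bool()
    # pairs of positions that fall inside the same region
    same_region = (masks[:, :, None] & masks[:, None, :]).any(dim=0)
    bias = sup * (~same_region).float()   # penalize inter-region attention
    return torch.softmax(scores + bias, dim=-1)
```

In a text-to-video diffusion pipeline, biases like these would presumably be applied inside each attention layer at every denoising step, with the region masks obtained from some spatial decomposition of the frame (e.g., instance or part segmentation of the source video).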
