

OmniCreator: Self-Supervised Unified Generation with Universal Editing

December 3, 2024
Authors: Haodong Chen, Lan Wang, Harry Yang, Ser-Nam Lim
cs.AI

Abstract

We introduce OmniCreator, a novel framework that can perform text-prompted unified (image+video) generation as well as editing, all in one place. OmniCreator acquires generative and universal editing capabilities in a self-supervised manner, taking original text-video pairs as conditions while utilizing the same video as a denoising target to learn the semantic correspondence between video and text. During inference, when presented with a text prompt and a video, OmniCreator generates a target that is faithful to both, achieving an unconstrained, universal editing effect, in contrast to existing editing work that primarily focuses on certain editing types or relies on additional controls (e.g., structural conditions, attention features, or DDIM inversion). When presented with a text prompt only, OmniCreator becomes generative, producing high-quality videos as a result of the learned semantic correspondence. Importantly, we find that the same capabilities extend to images as-is, making OmniCreator a truly unified framework. Further, due to the lack of existing generative video editing benchmarks, we introduce the OmniBench-99 dataset, designed to comprehensively evaluate the performance of generative video editing models. Extensive experiments demonstrate that OmniCreator exhibits substantial superiority over all other models.
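The core training idea described above, conditioning on the original text-video pair while denoising that very same video, follows the shape of a standard diffusion training step. Below is a minimal, hypothetical PyTorch sketch of that objective; all module names, tensor shapes, and the simplified noise schedule are illustrative assumptions, not OmniCreator's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical placeholder modules standing in for a text encoder, a video
# condition adapter, and a latent-diffusion denoiser (shapes are assumed).
text_encoder = nn.Linear(77, 128)        # prompt tokens -> text embedding
video_encoder = nn.Linear(4096, 128)     # flattened video -> condition embedding
denoiser = nn.Linear(4096 + 256, 4096)   # noisy video + conditions -> predicted noise

def training_step(video, prompt_emb, sigma):
    """One self-supervised step: the SAME clip serves as both the
    condition and the denoising target, as the abstract describes."""
    # Encode both halves of the original text-video pair as conditions.
    text_cond = text_encoder(prompt_emb)
    video_cond = video_encoder(video)

    # Corrupt the same video with Gaussian noise (simplified schedule).
    noise = torch.randn_like(video)
    noisy_video = video + sigma * noise

    # Predict the noise given (noisy video, text cond, video cond); the
    # reconstruction loss ties the text semantics to the video content.
    cond = torch.cat([text_cond, video_cond], dim=-1)
    pred = denoiser(torch.cat([noisy_video, cond], dim=-1))
    return F.mse_loss(pred, noise)

# Toy usage: random tensors standing in for a batch of two clips.
loss = training_step(torch.randn(2, 4096), torch.randn(2, 77), sigma=0.5)
loss.backward()
```

Under this reading, editing at inference amounts to supplying a different video together with a (possibly modified) prompt, while dropping the video condition leaves a text-to-video generator; the sketch only covers the training objective.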

