InstanceCap: Improving Text-to-Video Generation via Instance-aware Structured Caption

December 12, 2024
Authors: Tiehan Fan, Kepan Nan, Rui Xie, Penghao Zhou, Zhenheng Yang, Chaoyou Fu, Xiang Li, Jian Yang, Ying Tai
cs.AI

Abstract

Text-to-video generation has evolved rapidly in recent years, delivering remarkable results. Training typically relies on video-caption paired data, which plays a crucial role in enhancing generation performance. However, current video captions often suffer from insufficient detail, hallucinations, and imprecise motion depiction, which degrade the fidelity and consistency of generated videos. In this work, we propose a novel instance-aware structured caption framework, termed InstanceCap, to achieve instance-level and fine-grained video captioning for the first time. Building on this scheme, we design an auxiliary model cluster that converts the original video into instances to enhance instance fidelity. The video instances are further used to refine dense prompts into structured phrases, yielding concise yet precise descriptions. Furthermore, we curate InstanceVid, a dataset of 22K videos, for training, and propose an enhancement pipeline tailored to the InstanceCap structure for inference. Experimental results demonstrate that our proposed InstanceCap significantly outperforms previous models, ensuring high fidelity between captions and videos while reducing hallucinations.
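
To make the idea of an instance-aware structured caption concrete, the following is a minimal sketch of what such a structure could look like in Python. The class and field names (InstanceDescription, StructuredCaption, to_prompt) are illustrative assumptions; the abstract does not specify the actual schema used by InstanceCap.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical schema: field names below are illustrative assumptions,
# not the paper's published caption format.

@dataclass
class InstanceDescription:
    category: str        # instance class, e.g. "dog"
    appearance: str      # fine-grained appearance details
    actions: List[str]   # per-instance motion phrases

@dataclass
class StructuredCaption:
    global_scene: str                     # background and camera description
    instances: List[InstanceDescription]  # one entry per salient instance

    def to_prompt(self) -> str:
        """Flatten the structured caption into a concise generation prompt."""
        parts = [self.global_scene]
        for inst in self.instances:
            parts.append(
                f"{inst.category}: {inst.appearance}; " + ", ".join(inst.actions)
            )
        return " | ".join(parts)

# Example: one instance with precise appearance and motion phrases.
caption = StructuredCaption(
    global_scene="a sunlit park, slow panning camera",
    instances=[
        InstanceDescription(
            category="dog",
            appearance="golden retriever with a red collar",
            actions=["runs toward the camera", "catches a frisbee mid-air"],
        )
    ],
)
print(caption.to_prompt())
```

Structuring the caption per instance, rather than as one dense paragraph, is what lets each object's appearance and motion be described precisely and checked against the video, which is the fidelity property the abstract emphasizes.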
