MotiF: Making Text Count in Image Animation with Motion Focal Loss

December 20, 2024
Authors: Shijie Wang, Samaneh Azadi, Rohit Girdhar, Saketh Rambhatla, Chen Sun, Xi Yin
cs.AI

Abstract

Text-Image-to-Video (TI2V) generation aims to generate a video from an image following a text description, which is also referred to as text-guided image animation. Most existing methods struggle to generate videos that align well with the text prompts, particularly when motion is specified. To overcome this limitation, we introduce MotiF, a simple yet effective approach that directs the model's learning to the regions with more motion, thereby improving the text alignment and motion generation. We use optical flow to generate a motion heatmap and weight the loss according to the intensity of the motion. This modified objective leads to noticeable improvements and complements existing methods that utilize motion priors as model inputs. Additionally, due to the lack of a diverse benchmark for evaluating TI2V generation, we propose TI2V Bench, a dataset consisting of 320 image-text pairs for robust evaluation. We present a human evaluation protocol that asks the annotators to select an overall preference between two videos, followed by their justifications. Through a comprehensive evaluation on TI2V Bench, MotiF outperforms nine open-source models, achieving an average preference of 72%. TI2V Bench is released at https://wang-sj16.github.io/motif/.
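
The loss-weighting idea described in the abstract is compact enough to sketch. Below is a minimal, hypothetical PyTorch illustration of a motion-weighted diffusion loss: optical-flow magnitude between consecutive frames is normalized into a heatmap, which then up-weights the per-pixel reconstruction loss. The function names (`heatmap_from_flow`, `motion_focal_loss`) and the `1 + alpha * heatmap` weighting form are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def heatmap_from_flow(flow: torch.Tensor) -> torch.Tensor:
    """Turn optical flow into a motion-intensity heatmap in [0, 1].

    flow: (B, 2, T, H, W) optical flow between consecutive frames
          (assumed precomputed by an off-the-shelf flow estimator).
    """
    mag = torch.linalg.norm(flow, dim=1, keepdim=True)            # (B, 1, T, H, W)
    mag = mag / (mag.amax(dim=(2, 3, 4), keepdim=True) + 1e-6)    # per-sample normalization
    return mag

def motion_focal_loss(pred: torch.Tensor, target: torch.Tensor,
                      heatmap: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Diffusion loss weighted by motion intensity (hypothetical weighting form).

    pred, target: (B, C, T, H, W) model prediction and diffusion training target.
    heatmap:      (B, 1, T, H, W) motion heatmap from heatmap_from_flow.
    alpha:        assumed hyperparameter controlling the motion emphasis.
    """
    per_pixel = F.mse_loss(pred, target, reduction="none")
    # Static regions keep the base weight of 1; high-motion regions count more.
    weights = 1.0 + alpha * heatmap
    return (weights * per_pixel).mean()
```

Because the weights never drop below one, this sketch only re-balances the standard objective toward moving regions rather than discarding the static-region signal, which is consistent with the abstract's claim that the method complements, rather than replaces, approaches that feed motion priors as model inputs.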
