VideoUFO: A Million-Scale User-Focused Dataset for Text-to-Video Generation
March 3, 2025
Authors: Wenhao Wang, Yi Yang
cs.AI
Abstract
Text-to-video generative models convert textual prompts into dynamic visual
content, offering wide-ranging applications in film production, gaming, and
education. However, their real-world performance often falls short of user
expectations. One key reason is that these models have not been trained on
videos related to some topics users want to create. In this paper, we propose
VideoUFO, the first Video dataset specifically curated to align with Users'
FOcus in real-world scenarios. Beyond this, our VideoUFO also features: (1)
minimal (0.29%) overlap with existing video datasets, and (2) videos
searched exclusively via YouTube's official API under the Creative Commons
license. These two attributes provide future researchers with greater freedom
to broaden their training sources. VideoUFO comprises over 1.09 million
video clips, each paired with both a brief and a detailed caption.
Specifically, through clustering, we first identify 1,291
user-focused topics from the million-scale real text-to-video prompt dataset,
VidProM. Then, we use these topics to retrieve videos from YouTube, split the
retrieved videos into clips, and generate both brief and detailed captions for
each clip. After verifying that the clips match their assigned topics, we are
left with about 1.09 million video clips. Our experiments reveal that (1) the
16 current text-to-video models we evaluate do not achieve consistent
performance across all user-focused topics; and (2) a simple model trained on
VideoUFO outperforms the others on the worst-performing topics. The dataset is
publicly available at
https://huggingface.co/datasets/WenhaoWang/VideoUFO under the CC BY 4.0
License.
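The first step of the curation pipeline, clustering a million-scale prompt set into user-focused topics, can be sketched in miniature. The hashing bag-of-words embedding, the toy prompts, and k=3 below are illustrative stand-ins only: the paper clusters the VidProM prompts into 1,291 topics, and its actual text encoder and clustering configuration are not specified in this abstract.

```python
import math
import random

def embed(prompt, dim=64):
    """Toy hashed bag-of-words embedding (a stand-in for a real text encoder)."""
    vec = [0.0] * dim
    for word in prompt.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def kmeans(vectors, k, iters=20, seed=0):
    """Plain Lloyd's k-means; returns a cluster index for each input vector."""
    rng = random.Random(seed)
    centroids = [list(v) for v in rng.sample(vectors, k)]
    assign = [0] * len(vectors)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        for i, v in enumerate(vectors):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centroids[c])),
            )
        # Update step: each centroid becomes the mean of its assigned vectors.
        for c in range(k):
            members = [vectors[i] for i in range(len(vectors)) if assign[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign

# Hypothetical text-to-video prompts; each cluster would become one topic
# used as a YouTube search query in the retrieval step.
prompts = [
    "a cat playing piano", "a kitten chasing a toy",
    "spaceship flying through a nebula", "rocket launch at night",
    "chef cooking pasta in a kitchen", "baking bread close up",
]
labels = kmeans([embed(p) for p in prompts], k=3)
```

At real scale one would swap in a learned sentence encoder and a scalable clustering backend; the structure of the step (embed every prompt, partition the embeddings, name each partition as a topic) stays the same.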