Describe Anything: Detailed Localized Image and Video Captioning
April 22, 2025
作者: Long Lian, Yifan Ding, Yunhao Ge, Sifei Liu, Hanzi Mao, Boyi Li, Marco Pavone, Ming-Yu Liu, Trevor Darrell, Adam Yala, Yin Cui
cs.AI
Abstract
Generating detailed and accurate descriptions for specific regions in images
and videos remains a fundamental challenge for vision-language models. We
introduce the Describe Anything Model (DAM), a model designed for detailed
localized captioning (DLC). DAM preserves both local details and global context
through two key innovations: a focal prompt, which ensures high-resolution
encoding of targeted regions, and a localized vision backbone, which integrates
precise localization with its broader context. To tackle the scarcity of
high-quality DLC data, we propose a Semi-supervised learning (SSL)-based Data
Pipeline (DLC-SDP). DLC-SDP starts with existing segmentation datasets and
expands to unlabeled web images using SSL. We introduce DLC-Bench, a benchmark
designed to evaluate DLC without relying on reference captions. DAM sets new
state-of-the-art on 7 benchmarks spanning keyword-level, phrase-level, and
detailed multi-sentence localized image and video captioning.
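
The abstract describes the focal prompt only at a high level. As a rough illustration, the Python sketch below shows one plausible reading: extract a high-resolution crop around the target region's mask, padded with surrounding context, so the model can encode both the full image (global context) and the crop (local detail). The function name `focal_crop` and the `context` padding ratio are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def focal_crop(image: np.ndarray, mask: np.ndarray, context: float = 0.5):
    """Hypothetical sketch of a 'focal prompt': crop a window around a
    binary region mask, padded with surrounding context, so the target
    region can be encoded at high resolution alongside the full image."""
    ys, xs = np.nonzero(mask)
    y0, y1 = ys.min(), ys.max()
    x0, x1 = xs.min(), xs.max()
    # Expand the tight bounding box by a fixed fraction on each side so
    # the crop keeps some of the region's immediate surroundings.
    h, w = y1 - y0 + 1, x1 - x0 + 1
    pad_y, pad_x = int(h * context), int(w * context)
    y0 = max(0, y0 - pad_y)
    y1 = min(image.shape[0] - 1, y1 + pad_y)
    x0 = max(0, x0 - pad_x)
    x1 = min(image.shape[1] - 1, x1 + pad_x)
    crop = image[y0 : y1 + 1, x0 : x1 + 1]
    crop_mask = mask[y0 : y1 + 1, x0 : x1 + 1]
    return crop, crop_mask

# Both views would then be encoded, e.g. the full (image, mask) pair for
# global context and the (crop, crop_mask) pair for local detail.
```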
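
DLC-SDP is likewise only named in the abstract. The sketch below shows a generic self-training loop consistent with that description: start from labeled segmentation data, caption regions proposed on unlabeled web images, and keep only confident pseudo-labels. Every name here (`model.caption`, `model.score`, `segmenter`, the confidence threshold) is hypothetical, not the paper's API.

```python
def expand_with_ssl(model, labeled, unlabeled_images, segmenter,
                    threshold=0.8, rounds=2):
    """Generic semi-supervised self-training loop: grow a detailed
    localized captioning dataset from unlabeled images. All names and
    parameters are illustrative assumptions."""
    data = list(labeled)  # (image, region_mask, caption) triples
    for _ in range(rounds):
        model.train(data)
        for image in unlabeled_images:
            for mask in segmenter(image):  # propose candidate regions
                caption = model.caption(image, mask)
                # Keep only pseudo-labels the model itself rates highly.
                if model.score(image, mask, caption) >= threshold:
                    data.append((image, mask, caption))
    return data
```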