
Describe Anything: Detailed Localized Image and Video Captioning

April 22, 2025
作者: Long Lian, Yifan Ding, Yunhao Ge, Sifei Liu, Hanzi Mao, Boyi Li, Marco Pavone, Ming-Yu Liu, Trevor Darrell, Adam Yala, Yin Cui
cs.AI

Abstract

Generating detailed and accurate descriptions for specific regions in images and videos remains a fundamental challenge for vision-language models. We introduce the Describe Anything Model (DAM), a model designed for detailed localized captioning (DLC). DAM preserves both local details and global context through two key innovations: a focal prompt, which ensures high-resolution encoding of targeted regions, and a localized vision backbone, which integrates precise localization with its broader context. To tackle the scarcity of high-quality DLC data, we propose a Semi-supervised learning (SSL)-based Data Pipeline (DLC-SDP). DLC-SDP starts with existing segmentation datasets and expands to unlabeled web images using SSL. We introduce DLC-Bench, a benchmark designed to evaluate DLC without relying on reference captions. DAM sets new state-of-the-art on 7 benchmarks spanning keyword-level, phrase-level, and detailed multi-sentence localized image and video captioning.
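To make the focal-prompt idea concrete, here is a minimal sketch (not the authors' implementation) of how a "focal prompt" input could be assembled: alongside the full image, the model receives a high-resolution crop of the target region expanded by a context margin, so that local detail is preserved without losing the surroundings. The function name `focal_prompt` and the `context` margin parameter are illustrative assumptions.

```python
import numpy as np

def focal_prompt(image: np.ndarray, mask: np.ndarray, context: float = 0.5):
    """Hypothetical focal-prompt construction: return the full image plus a
    crop around the masked region, expanded by a relative context margin.

    image: H x W x C array; mask: H x W boolean array marking the region.
    Returns (full_image, focal_crop, cropped_mask).
    """
    ys, xs = np.where(mask)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    # Expand the bounding box by a margin so the crop keeps nearby context.
    h, w = y1 - y0, x1 - x0
    my, mx = int(h * context), int(w * context)
    y0, y1 = max(0, y0 - my), min(image.shape[0], y1 + my)
    x0, x1 = max(0, x0 - mx), min(image.shape[1], x1 + mx)
    # The crop (and its mask) would be encoded at high resolution, while the
    # full image supplies global context to the vision backbone.
    return image, image[y0:y1, x0:x1], mask[y0:y1, x0:x1]
```

In practice both views would be resized and fed to separate (or shared) vision encoders; the design choice the abstract highlights is that the target region is never downsampled away inside a single low-resolution global view.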
