Exploring Multi-Grained Concept Annotations for Multimodal Large Language Models
December 8, 2024
Authors: Xiao Xu, Tianhao Niu, Yuxi Xie, Libo Qin, Wanxiang Che, Min-Yen Kan
cs.AI
Abstract
Multimodal Large Language Models (MLLMs) excel in vision--language tasks by
pre-training solely on coarse-grained concept annotations (e.g., image
captions). We hypothesize that integrating fine-grained concept annotations
(e.g., object labels and object regions) will further improve performance, as
both data granularities complement each other in terms of breadth and depth in
concept representation. We introduce a new dataset featuring Multimodal
Multi-Grained Concept annotations (MMGiC) for MLLMs. In constructing MMGiC, we
explore the impact of different data recipes on multimodal comprehension and
generation. Our analyses reveal that multi-grained concept annotations
integrate and complement each other, under our structured template and a
general MLLM framework. We clearly explore and demonstrate the potential of
MMGiC to help MLLMs better locate and learn concepts, aligning vision and
language at multiple granularities. We further validate our hypothesis by
investigating the fair comparison and effective collaboration between MMGiC and
image--caption data on 12 multimodal comprehension and generation benchmarks,
e.g., their appropriate combination achieves 3.95% and 2.34% absolute
improvements over image--caption data alone on POPE and SEED-Bench. Code, data
and models will be available at https://github.com/LooperXX/MMGiC.
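To make the notion of multi-grained annotations concrete, the sketch below shows one way a single record combining coarse-grained and fine-grained concept annotations could be organized. The class names, field names, and bounding-box format are illustrative assumptions based only on the granularities named in the abstract (image captions, object labels, object regions); they are not the actual MMGiC schema or the paper's structured template.

# Hypothetical sketch of a multi-grained annotation record.
# Only the granularities mentioned in the abstract are modeled;
# the schema and the (x1, y1, x2, y2) normalized-box format are assumptions.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class ObjectAnnotation:
    label: str                                   # fine-grained concept label, e.g. "dog"
    region: Tuple[float, float, float, float]    # assumed box (x1, y1, x2, y2), normalized to [0, 1]


@dataclass
class MultiGrainedRecord:
    image_path: str
    caption: str                                 # coarse-grained concept annotation
    objects: List[ObjectAnnotation] = field(default_factory=list)  # fine-grained annotations


# Example usage: one image annotated at both granularities.
record = MultiGrainedRecord(
    image_path="images/000001.jpg",
    caption="A dog chasing a ball on the grass.",
    objects=[
        ObjectAnnotation(label="dog", region=(0.12, 0.30, 0.45, 0.80)),
        ObjectAnnotation(label="ball", region=(0.55, 0.60, 0.68, 0.75)),
    ],
)
print(record.caption, len(record.objects))

In this view, the caption supplies breadth (the scene-level concept) while the object labels and regions supply depth (where each concept is grounded), which is the complementarity the abstract hypothesizes.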