Florence-VL: Enhancing Vision-Language Models with Generative Vision Encoder and Depth-Breadth Fusion
December 5, 2024
Authors: Jiuhai Chen, Jianwei Yang, Haiping Wu, Dianqi Li, Jianfeng Gao, Tianyi Zhou, Bin Xiao
cs.AI
Abstract
https://github.com/JiuhaiChen/Florence-VL
We present Florence-VL, a new family of multimodal large language models
(MLLMs) with enriched visual representations produced by Florence-2, a
generative vision foundation model. Unlike the widely used CLIP-style vision
transformer trained by contrastive learning, Florence-2 can capture different
levels and aspects of visual features, which are more versatile and adapt more
readily to diverse downstream tasks. We propose a novel feature-fusion architecture and
an innovative training recipe that effectively integrates Florence-2's visual
features into pretrained LLMs, such as Phi 3.5 and Llama 3. In particular, we
propose "depth-breadth fusion (DBFusion)" to fuse the visual features extracted
from different depths and under multiple prompts. Our model training is
composed of end-to-end pretraining of the whole model followed by finetuning of
the projection layer and the LLM, on a carefully designed recipe of diverse
open-source datasets that include high-quality image captions and
instruction-tuning pairs. Our quantitative analysis and visualization of
Florence-VL's visual features show its advantages over popular vision encoders
on vision-language alignment, where the enriched depth and breadth play
important roles. Florence-VL achieves significant improvements over existing
state-of-the-art MLLMs across various multimodal and vision-centric benchmarks
covering general VQA, perception, hallucination, OCR, charts,
knowledge-intensive understanding, etc. To facilitate future research, our
models and the complete training recipe are open-sourced.
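The abstract describes DBFusion only at a high level: visual features taken from different encoder depths and elicited by multiple task prompts are fused before being passed to the LLM. As a rough illustration, one common way to realize such a fusion is channel-wise concatenation followed by a learned projection into the LLM's embedding space. The sketch below assumes this design; the function name, dimensions, and random weights are hypothetical and not taken from the paper's implementation:

```python
import numpy as np

# Hypothetical dimensions: feature maps of 16 visual tokens x 64 channels.
TOKENS, CHANNELS, LLM_DIM = 16, 64, 128

def db_fusion(feature_maps, proj):
    """Sketch of a depth-breadth style fusion: concatenate visual feature
    maps (from different encoder depths and different task prompts) along
    the channel axis, then project the result into the LLM embedding space.
    """
    fused = np.concatenate(feature_maps, axis=-1)  # (TOKENS, n * CHANNELS)
    return fused @ proj                            # (TOKENS, LLM_DIM)

rng = np.random.default_rng(0)
# One feature map per depth/prompt; all share the same token grid.
maps = [rng.standard_normal((TOKENS, CHANNELS)) for _ in range(3)]
# Stand-in for the trained projection layer mentioned in the abstract.
proj = rng.standard_normal((3 * CHANNELS, LLM_DIM))
tokens_for_llm = db_fusion(maps, proj)
print(tokens_for_llm.shape)  # (16, 128)
```

Concatenation along the channel axis keeps every token position intact, so the fused representation preserves the spatial layout of the image while letting the projection layer learn how to weight the different depths and prompts.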