
Florence-VL: Enhancing Vision-Language Models with Generative Vision Encoder and Depth-Breadth Fusion

December 5, 2024
作者: Jiuhai Chen, Jianwei Yang, Haiping Wu, Dianqi Li, Jianfeng Gao, Tianyi Zhou, Bin Xiao
cs.AI

Abstract

We present Florence-VL, a new family of multimodal large language models (MLLMs) with enriched visual representations produced by Florence-2, a generative vision foundation model. Unlike the widely used CLIP-style vision transformers trained by contrastive learning, Florence-2 captures different levels and aspects of visual features, making it more versatile and easier to adapt to diverse downstream tasks. We propose a novel feature-fusion architecture and an innovative training recipe that effectively integrates Florence-2's visual features into pretrained LLMs such as Phi 3.5 and Llama 3. In particular, we propose Depth-Breadth Fusion (DBFusion) to fuse visual features extracted from different depths and under multiple prompts. Our model training consists of end-to-end pretraining of the whole model, followed by finetuning of the projection layer and the LLM on a carefully designed recipe of diverse open-source datasets that include high-quality image captions and instruction-tuning pairs. Our quantitative analysis and visualization of Florence-VL's visual features show its advantages over popular vision encoders on vision-language alignment, where the enriched depth and breadth play important roles. Florence-VL achieves significant improvements over existing state-of-the-art MLLMs across various multimodal and vision-centric benchmarks covering general VQA, perception, hallucination, OCR, charts, knowledge-intensive understanding, etc. To facilitate future research, our models and the complete training recipe are open-sourced: https://github.com/JiuhaiChen/Florence-VL
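As a rough illustration of the fusion idea described in the abstract, the sketch below concatenates visual features taken from different encoder depths and obtained under different task prompts, then projects them into the LLM's embedding space through a single projection. All module names, dimensions, and the channel-wise concatenation strategy are illustrative assumptions, not the authors' actual implementation; see the linked repository for the real code.

```python
# Minimal sketch of depth-breadth fusion (DBFusion), assuming channel-wise
# concatenation of per-depth and per-prompt features. Names and dimensions
# are hypothetical, chosen only to make the example self-contained.
import torch
import torch.nn as nn


class DBFusion(nn.Module):
    def __init__(self, vis_dim: int, num_branches: int, llm_dim: int):
        super().__init__()
        # A small MLP maps the concatenated visual features into the
        # LLM's token-embedding space (the "projection layer").
        self.proj = nn.Sequential(
            nn.Linear(vis_dim * num_branches, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, branch_feats: list[torch.Tensor]) -> torch.Tensor:
        # branch_feats: features from different depths of the vision encoder
        # and from different task prompts (e.g. captioning, OCR, grounding),
        # each of shape (batch, num_tokens, vis_dim).
        fused = torch.cat(branch_feats, dim=-1)  # (batch, num_tokens, vis_dim * k)
        return self.proj(fused)                  # (batch, num_tokens, llm_dim)


# Usage: three branches, e.g. two encoder depths plus one extra prompt.
fusion = DBFusion(vis_dim=1024, num_branches=3, llm_dim=4096)
feats = [torch.randn(2, 576, 1024) for _ in range(3)]
tokens = fusion(feats)  # visual tokens fed to the LLM alongside text embeddings
print(tokens.shape)     # torch.Size([2, 576, 4096])
```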
