EVEv2: Improved Baselines for Encoder-Free Vision-Language Models
February 10, 2025
Authors: Haiwen Diao, Xiaotong Li, Yufeng Cui, Yueze Wang, Haoge Deng, Ting Pan, Wenxuan Wang, Huchuan Lu, Xinlong Wang
cs.AI
Abstract
Existing encoder-free vision-language models (VLMs) are rapidly narrowing the
performance gap with their encoder-based counterparts, highlighting the
promising potential for unified multimodal systems with structural simplicity
and efficient deployment. We systematically clarify the performance gap among
VLMs built on pre-trained vision encoders, discrete tokenizers, and minimalist
visual layers trained from scratch, and dig deeply into the under-examined
characteristics of encoder-free VLMs. We develop efficient strategies that
enable encoder-free VLMs to rival mainstream encoder-based ones. After an in-depth
investigation, we launch EVEv2.0, a new and improved family of encoder-free
VLMs. We show that: (i) Properly decomposing and hierarchically associating
vision and language within a unified model reduces interference between
modalities. (ii) A well-designed training strategy enables effective
optimization for encoder-free VLMs. Through extensive evaluation, our EVEv2.0
offers a thorough study toward developing decoder-only architectures across
modalities, demonstrating superior data efficiency and strong vision-reasoning
capability. Code is publicly available at: https://github.com/baaivision/EVE.
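To make finding (i) concrete, below is a minimal, hypothetical PyTorch sketch (not the released EVE code; the class name `ModalityRoutedBlock` and all layer choices are illustrative assumptions) of one way a unified decoder-only layer can decompose modalities: vision and text tokens share a single sequence, but each token is routed through modality-specific normalization and feed-forward weights.

```python
# Hypothetical sketch, not the official EVE implementation: one decoder layer
# that keeps text and vision tokens in a single shared sequence but routes
# each token through modality-specific weights.
import torch
import torch.nn as nn


class ModalityRoutedBlock(nn.Module):
    """Decoder layer with per-modality norms and FFNs (0 = text, 1 = vision)."""

    def __init__(self, dim: int, n_heads: int):
        super().__init__()
        self.norm1 = nn.ModuleList([nn.LayerNorm(dim) for _ in range(2)])
        # Attention weights are shared here for brevity; a fuller decomposition
        # could also give each modality its own Q/K/V projections.
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm2 = nn.ModuleList([nn.LayerNorm(dim) for _ in range(2)])
        self.ffn = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(2)
        ])

    @staticmethod
    def route(layers: nn.ModuleList, x: torch.Tensor, modality: torch.Tensor) -> torch.Tensor:
        """Apply layers[0] to text tokens and layers[1] to vision tokens."""
        out = torch.empty_like(x)
        for m, layer in enumerate(layers):
            mask = modality == m          # (batch, seq) boolean mask for modality m
            out[mask] = layer(x[mask])    # gather those tokens, transform, scatter back
        return out

    def forward(self, x: torch.Tensor, modality: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim); modality: (batch, seq) ints, 0 = text, 1 = vision.
        h = self.route(self.norm1, x, modality)
        attn_out, _ = self.attn(h, h, h, need_weights=False)  # causal mask omitted for brevity
        x = x + attn_out
        x = x + self.route(self.ffn, self.route(self.norm2, x, modality), modality)
        return x


# Toy usage: a 10-token sequence whose first 4 tokens are image patches.
block = ModalityRoutedBlock(dim=64, n_heads=4)
x = torch.randn(2, 10, 64)
modality = torch.zeros(2, 10, dtype=torch.long)
modality[:, :4] = 1
print(block(x, modality).shape)  # torch.Size([2, 10, 64])
```

Routing by a per-token modality index lets both modalities share the causal sequence and attention pattern while keeping their parameters separate, which is the kind of decomposition the abstract credits with reducing cross-modal interference; the staged training recipe of finding (ii) is not detailed in the abstract and is therefore not sketched here.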