LLaVA-Mini: Efficient Image and Video Large Multimodal Models with One Vision Token
January 7, 2025
Authors: Shaolei Zhang, Qingkai Fang, Zhe Yang, Yang Feng
cs.AI
Abstract
The advent of real-time large multimodal models (LMMs) like GPT-4o has
sparked considerable interest in efficient LMMs. LMM frameworks typically
encode visual inputs into vision tokens (continuous representations) and
integrate them, along with textual instructions, into the context of large language
models (LLMs), where large-scale parameters and numerous context tokens
(predominantly vision tokens) result in substantial computational overhead.
Previous efforts toward efficient LMMs have largely focused on replacing the LLM
backbone with smaller models, while neglecting the crucial issue of token
quantity. In this paper, we introduce LLaVA-Mini, an efficient LMM with minimal
vision tokens. To achieve a high compression ratio of vision tokens while
preserving visual information, we first analyze how LMMs understand vision
tokens and find that most vision tokens play a crucial role only in the early
layers of the LLM backbone, where they mainly fuse visual information into text
tokens. Building on this finding, LLaVA-Mini introduces modality pre-fusion to
fuse visual information into text tokens in advance, thereby facilitating the
extreme compression of the vision tokens fed to the LLM backbone into a single token.
LLaVA-Mini is a unified large multimodal model that efficiently supports the
understanding of images, high-resolution images, and videos. Experiments
across 11 image-based and 7 video-based benchmarks
demonstrate that LLaVA-Mini outperforms LLaVA-v1.5 with just 1 vision token
instead of 576. Efficiency analyses reveal that LLaVA-Mini can reduce FLOPs by
77%, deliver low-latency responses within 40 milliseconds, and process over
10,000 frames of video on GPU hardware with 24GB of memory.
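
A minimal sketch of the mechanism the abstract describes: a few pre-fusion layers let text tokens attend to vision tokens before the LLM backbone, and a learnable query compresses the 576 vision tokens into one. The module names, layer counts, and the query-based compressor below are illustrative assumptions, not the paper's released implementation.

```python
# Hypothetical sketch of LLaVA-Mini-style modality pre-fusion and
# vision-token compression (shapes and module names are assumptions).
import torch
import torch.nn as nn


class ModalityPreFusion(nn.Module):
    """Fuse visual information into text tokens *before* the LLM backbone,
    then compress all vision tokens into a single token."""

    def __init__(self, d_model=512, n_heads=8, n_fusion_layers=2):
        super().__init__()
        # Pre-fusion: transformer layers where text tokens (tgt) attend
        # to vision tokens (memory) via cross-attention.
        layer = nn.TransformerDecoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.fusion = nn.TransformerDecoder(layer, num_layers=n_fusion_layers)
        # Compression: one learnable query cross-attends to all vision tokens.
        self.query = nn.Parameter(torch.randn(1, 1, d_model))
        self.compress = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, vision_tokens, text_tokens):
        # vision_tokens: (B, 576, d) from the vision encoder + projector
        # text_tokens:   (B, T, d) embedded instruction tokens
        fused_text = self.fusion(tgt=text_tokens, memory=vision_tokens)
        query = self.query.expand(vision_tokens.size(0), -1, -1)
        compressed, _ = self.compress(query, vision_tokens, vision_tokens)
        # The LLM backbone now sees 1 vision token + T pre-fused text
        # tokens instead of 576 + T tokens.
        return torch.cat([compressed, fused_text], dim=1)


# Usage: 576 vision tokens (a 24x24 patch grid) reduced to a single token.
model = ModalityPreFusion()
v = torch.randn(2, 576, 512)
t = torch.randn(2, 16, 512)
print(model(v, t).shape)  # torch.Size([2, 17, 512])
```

Since the LLM backbone's per-layer cost grows with context length, replacing 576 vision tokens with a single pre-fused token is plausibly the main source of the reported 77% FLOPs reduction.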