LLaVA-Mini: Efficient Image and Video Large Multimodal Models with One Vision Token
January 7, 2025
Authors: Shaolei Zhang, Qingkai Fang, Zhe Yang, Yang Feng
cs.AI
Abstract
The advent of real-time large multimodal models (LMMs) like GPT-4o has
sparked considerable interest in efficient LMMs. LMM frameworks typically
encode visual inputs into vision tokens (continuous representations) and
integrate them and textual instructions into the context of large language
models (LLMs), where large-scale parameters and numerous context tokens
(predominantly vision tokens) result in substantial computational overhead.
Previous efforts toward efficient LMMs have largely focused on replacing the LLM
backbone with smaller models, while neglecting the crucial issue of token
quantity. In this paper, we introduce LLaVA-Mini, an efficient LMM with minimal
vision tokens. To achieve a high compression ratio of vision tokens while
preserving visual information, we first analyze how LMMs understand vision
tokens and find that most vision tokens only play a crucial role in the early
layers of the LLM backbone, where they mainly fuse visual information into text
tokens. Building on this finding, LLaVA-Mini introduces modality pre-fusion to
fuse visual information into text tokens in advance, thereby facilitating the
extreme compression of the vision tokens fed to the LLM backbone into one token.
LLaVA-Mini is a unified large multimodal model that can support the
understanding of images, high-resolution images, and videos in an efficient
manner. Experiments across 11 image-based and 7 video-based benchmarks
demonstrate that LLaVA-Mini outperforms LLaVA-v1.5 with just 1 vision token
instead of 576. Efficiency analyses reveal that LLaVA-Mini can reduce FLOPs by
77%, deliver low-latency responses within 40 milliseconds, and process over
10,000 frames of video on GPU hardware with 24GB of memory.
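The two mechanisms the abstract describes, compressing the vision tokens fed to the LLM backbone down to a single token and pre-fusing visual information into the text tokens, can be pictured with a minimal PyTorch sketch. This is not the paper's actual implementation: the module names (VisionTokenCompressor, ModalityPreFusion), the learnable-query cross-attention used for compression, and the standard transformer decoder layers used for pre-fusion are illustrative assumptions only.

```python
# Minimal sketch of the two ideas in the abstract (illustrative, not the paper's code):
# (1) compress the ~576 vision tokens from the vision encoder into one token, and
# (2) "modality pre-fusion": fuse visual information into text tokens before the LLM.
import torch
import torch.nn as nn


class VisionTokenCompressor(nn.Module):
    """Compress N vision tokens into `num_queries` tokens (here: 1) via cross-attention."""

    def __init__(self, dim: int, num_queries: int = 1, num_heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, vision_tokens: torch.Tensor) -> torch.Tensor:
        # vision_tokens: (batch, N, dim) -> (batch, num_queries, dim)
        q = self.queries.unsqueeze(0).expand(vision_tokens.size(0), -1, -1)
        compressed, _ = self.attn(q, vision_tokens, vision_tokens)
        return compressed


class ModalityPreFusion(nn.Module):
    """Let text tokens attend to all vision tokens before entering the LLM backbone."""

    def __init__(self, dim: int, num_heads: int = 8, num_layers: int = 2):
        super().__init__()
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=num_heads, batch_first=True)
        self.fusion = nn.TransformerDecoder(layer, num_layers=num_layers)

    def forward(self, text_tokens: torch.Tensor, vision_tokens: torch.Tensor) -> torch.Tensor:
        return self.fusion(tgt=text_tokens, memory=vision_tokens)


if __name__ == "__main__":
    dim, n_vision, n_text = 1024, 576, 32
    vision = torch.randn(2, n_vision, dim)   # e.g. patch features from the vision encoder
    text = torch.randn(2, n_text, dim)       # embedded instruction tokens

    fused_text = ModalityPreFusion(dim)(text, vision)        # (2, 32, 1024)
    one_vision_token = VisionTokenCompressor(dim)(vision)    # (2, 1, 1024)

    # The LLM backbone then receives 1 vision token plus the pre-fused text tokens,
    # instead of 576 vision tokens.
    llm_input = torch.cat([one_vision_token, fused_text], dim=1)
    print(llm_input.shape)  # torch.Size([2, 33, 1024])
```

Under such a setup the LLM backbone processes far fewer context tokens per query, which is where the FLOPs reduction reported in the abstract would come from.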