

AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning

December 4, 2024
Authors: Yiwu Zhong, Zhuoming Liu, Yin Li, Liwei Wang
cs.AI

Abstract

Large language models (LLMs) have enabled the creation of multi-modal LLMs that exhibit strong comprehension of visual data such as images and videos. However, these models usually rely on extensive visual tokens from visual encoders, leading to high computational demands, which limits their applicability in resource-constrained environments and for long-context tasks. In this work, we propose a training-free adaptive inference method for multi-modal LLMs that can accommodate a broad range of efficiency requirements with minimal performance drop. Our method consists of a) iterative token merging based on embedding similarity before the LLM, and b) progressive token pruning within LLM layers based on multi-modal importance. With a minimalist design, our method can be applied to both video and image LLMs. Extensive experiments on diverse video and image benchmarks demonstrate that our method substantially reduces the computational load (e.g., a 7-fold reduction in FLOPs) while preserving the performance of video and image LLMs. Further, at a similar computational cost, our method outperforms state-of-the-art methods in long video understanding (e.g., +4.6 on MLVU). Additionally, our in-depth analysis provides insights into token redundancy and LLM layer behaviors, offering guidance for future research in designing efficient multi-modal LLMs. Our code will be available at https://github.com/LaVi-Lab/AIM.
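The sketch below illustrates the two components described in the abstract: similarity-based token merging before the LLM, and importance-based pruning inside it. It is a minimal, hypothetical rendering rather than the authors' implementation; the function names (`merge_tokens`, `prune_tokens`), the pairwise-averaging merge rule, and the random stand-in importance scores (in place of the multi-modal importance the paper computes within LLM layers) are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def merge_tokens(tokens: torch.Tensor, target_len: int) -> torch.Tensor:
    """Iteratively merge the most similar pair of visual tokens (by cosine
    similarity of their embeddings) until only `target_len` tokens remain.
    tokens: (N, D) visual token embeddings from the vision encoder."""
    while tokens.shape[0] > target_len:
        normed = F.normalize(tokens, dim=-1)
        sim = normed @ normed.T                         # (N, N) pairwise cosine similarity
        sim.fill_diagonal_(-float("inf"))               # ignore self-similarity
        i, j = divmod(int(sim.argmax()), sim.shape[1])  # most similar pair
        merged = (tokens[i] + tokens[j]) / 2            # average the pair into one token
        keep = [k for k in range(tokens.shape[0]) if k not in (i, j)]
        tokens = torch.cat([tokens[keep], merged.unsqueeze(0)], dim=0)
    return tokens

def prune_tokens(visual: torch.Tensor, importance: torch.Tensor,
                 keep_ratio: float) -> torch.Tensor:
    """Keep the top `keep_ratio` fraction of visual tokens, ranked by a
    multi-modal importance score (e.g., text-to-visual attention)."""
    k = max(1, int(visual.shape[0] * keep_ratio))
    idx = importance.topk(k).indices.sort().values      # preserve original token order
    return visual[idx]

# Toy usage: merge 576 image tokens down to 144 before the LLM, then prune
# half of them at some LLM layer using (random) stand-in importance scores.
visual = torch.randn(576, 4096)
visual = merge_tokens(visual, target_len=144)
importance = torch.rand(visual.shape[0])
visual = prune_tokens(visual, importance, keep_ratio=0.5)
```

In the paper's progressive scheme, pruning of this kind would be applied repeatedly across LLM layers with the kept fraction shrinking at deeper layers, which is what lets the method trade accuracy for compute across a broad range of efficiency budgets.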

