BIMBA: Selective-Scan Compression for Long-Range Video Question Answering
March 12, 2025
Authors: Md Mohaiminul Islam, Tushar Nagarajan, Huiyu Wang, Gedas Bertasius, Lorenzo Torresani
cs.AI
Abstract
Video Question Answering (VQA) in long videos poses the key challenge of
extracting relevant information and modeling long-range dependencies from many
redundant frames. The self-attention mechanism provides a general solution for
sequence modeling, but it has a prohibitive cost when applied to a massive
number of spatiotemporal tokens in long videos. Most prior methods rely on
compression strategies to lower the computational cost, such as reducing the
input length via sparse frame sampling or compressing the output sequence
passed to the large language model (LLM) via space-time pooling. However, these
naive approaches over-represent redundant information and often miss salient
events or fast-occurring space-time patterns. In this work, we introduce BIMBA,
an efficient state-space model to handle long-form videos. Our model leverages
the selective scan algorithm to learn to effectively select critical
information from high-dimensional video and transform it into a reduced token
sequence for efficient LLM processing. Extensive experiments demonstrate that
BIMBA achieves state-of-the-art accuracy on multiple long-form VQA benchmarks,
including PerceptionTest, NExT-QA, EgoSchema, VNBench, LongVideoBench, and
Video-MME. Code and models are publicly available at
https://sites.google.com/view/bimba-mllm.
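To make the compression mechanism concrete, below is a minimal sketch of how a selective-scan (Mamba/S6-style) layer can distill a long sequence of flattened space-time tokens into a fixed-size set of query tokens. This is an illustrative reconstruction under simplifying assumptions (a diagonal state matrix, a single scan direction, and a naive sequential loop in place of a hardware-aware parallel scan); the class and parameter names here, such as SelectiveScanCompressor, num_queries, and d_state, are hypothetical and do not reflect the paper's actual implementation.

```python
# Hypothetical sketch of selective-scan token compression; not BIMBA's real code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelectiveScanCompressor(nn.Module):
    def __init__(self, d_model: int, d_state: int = 16, num_queries: int = 64):
        super().__init__()
        # Learnable query tokens appended to the video sequence; their
        # outputs after the scan serve as the compressed token set.
        self.queries = nn.Parameter(torch.randn(num_queries, d_model) * 0.02)
        # Input-dependent ("selective") SSM parameters: Delta, B, C are
        # computed per token, letting the scan keep or discard information.
        self.to_delta = nn.Linear(d_model, d_model)
        self.to_B = nn.Linear(d_model, d_state)
        self.to_C = nn.Linear(d_model, d_state)
        # Log-parameterized diagonal state matrix A; negated for stability.
        self.log_A = nn.Parameter(torch.zeros(d_model, d_state))

    def forward(self, video_tokens: torch.Tensor) -> torch.Tensor:
        # video_tokens: (batch, seq_len, d_model) flattened space-time tokens.
        b, _, d = video_tokens.shape
        x = torch.cat([video_tokens, self.queries.expand(b, -1, -1)], dim=1)
        delta = F.softplus(self.to_delta(x))   # (b, L, d) step sizes
        B = self.to_B(x)                       # (b, L, n) input gates
        C = self.to_C(x)                       # (b, L, n) output gates
        A = -torch.exp(self.log_A)             # (d, n) negative-real diagonal
        h = x.new_zeros(b, d, self.log_A.shape[1])  # recurrent state (b, d, n)
        ys = []
        # Naive sequential recurrence for clarity; real selective-scan
        # kernels use a parallel, hardware-aware scan.
        for t in range(x.shape[1]):
            # Discretize: h_t = exp(delta_t * A) * h_{t-1} + delta_t * B_t * x_t
            dA = torch.exp(delta[:, t].unsqueeze(-1) * A)         # (b, d, n)
            dBx = (delta[:, t].unsqueeze(-1)
                   * B[:, t].unsqueeze(1)
                   * x[:, t].unsqueeze(-1))                       # (b, d, n)
            h = dA * h + dBx
            ys.append((h * C[:, t].unsqueeze(1)).sum(-1))         # y_t = C_t h_t
        y = torch.stack(ys, dim=1)                                # (b, L, d)
        # Keep only the query positions: the reduced sequence for the LLM.
        return y[:, -self.queries.shape[0]:]
```

Under these assumptions, a long video yielding, say, several thousand flattened patch tokens would be reduced to just the 64 query outputs before being passed to the LLM, replacing quadratic self-attention over the full token sequence with a linear-time scan.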