
Video-Panda: Parameter-efficient Alignment for Encoder-free Video-Language Models

December 24, 2024
Authors: Jinhui Yi, Syed Talal Wasim, Yanan Luo, Muzammal Naseer, Juergen Gall
cs.AI

Abstract

We present an efficient encoder-free approach for video-language understanding that achieves competitive performance while significantly reducing computational overhead. Current video-language models typically rely on heavyweight image encoders (300M-1.1B parameters) or video encoders (1B-1.4B parameters), creating a substantial computational burden when processing multi-frame videos. Our method introduces a novel Spatio-Temporal Alignment Block (STAB) that directly processes video inputs without requiring pre-trained encoders, while using only 45M parameters for visual processing, at least a 6.5× reduction compared to traditional approaches. The STAB architecture combines Local Spatio-Temporal Encoding for fine-grained feature extraction, efficient spatial downsampling through learned attention, and separate mechanisms for modeling frame-level and video-level relationships. Our model achieves comparable or superior performance to encoder-based approaches for open-ended video question answering on standard benchmarks. Fine-grained video question-answering evaluation demonstrates our model's effectiveness, outperforming the encoder-based approaches Video-ChatGPT and Video-LLaVA in key aspects such as correctness and temporal understanding. Extensive ablation studies validate our architectural choices and demonstrate the effectiveness of our spatio-temporal modeling, while achieving 3-4× faster processing than previous methods. Code is available at https://github.com/jh-yi/Video-Panda.
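To make the abstract's description of STAB concrete, below is a minimal PyTorch sketch of how such a module could be organized: a 3D convolution for local spatio-temporal encoding, learned-query cross-attention for spatial downsampling, and separate transformer blocks for frame-level and video-level relationships. The class name `STABSketch`, all dimensions, and the specific layer choices are illustrative assumptions rather than the authors' implementation; refer to the linked repository for the actual code.

```python
# A minimal, illustrative sketch of an STAB-like alignment module.
# All names, dimensions, and layer choices are assumptions for
# illustration; see https://github.com/jh-yi/Video-Panda for the
# authors' actual implementation.
import torch
import torch.nn as nn


class STABSketch(nn.Module):
    def __init__(self, embed_dim=768, num_heads=8, num_queries=64,
                 patch_size=14, in_chans=3):
        super().__init__()
        # Local spatio-temporal encoding: a 3D convolution over small
        # space-time neighborhoods extracts fine-grained local features
        # directly from pixels, with no pre-trained encoder.
        self.local_encode = nn.Conv3d(
            in_chans, embed_dim,
            kernel_size=(3, patch_size, patch_size),
            stride=(1, patch_size, patch_size),
            padding=(1, 0, 0),
        )
        # Spatial downsampling through learned attention: a fixed set of
        # learned queries cross-attends to each frame's patch tokens,
        # reducing them to num_queries tokens per frame.
        self.queries = nn.Parameter(torch.randn(num_queries, embed_dim))
        self.downsample = nn.MultiheadAttention(embed_dim, num_heads,
                                                batch_first=True)
        # Separate relationship modeling: one block attends within each
        # frame, another attends across all frames of the video.
        self.frame_block = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(embed_dim, num_heads,
                                       batch_first=True), num_layers=1)
        self.video_block = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(embed_dim, num_heads,
                                       batch_first=True), num_layers=1)

    def forward(self, video):
        # video: (B, C, T, H, W) raw pixel input.
        B = video.size(0)
        x = self.local_encode(video)                 # (B, D, T, H', W')
        _, D, T, Hp, Wp = x.shape
        x = x.permute(0, 2, 3, 4, 1).reshape(B * T, Hp * Wp, D)
        # Downsample each frame's patch tokens to num_queries tokens.
        q = self.queries.unsqueeze(0).expand(B * T, -1, -1)
        x, _ = self.downsample(q, x, x)              # (B*T, Q, D)
        # Frame-level relations: attention within each frame.
        x = self.frame_block(x)                      # (B*T, Q, D)
        # Video-level relations: attention across the whole video.
        x = x.reshape(B, T * x.size(1), D)
        x = self.video_block(x)                      # (B, T*Q, D)
        return x  # visual tokens to be consumed by the language model


# Example: 8 frames at 224x224 yield (1, 8*64, 768) visual tokens.
tokens = STABSketch()(torch.randn(1, 3, 8, 224, 224))
```

One design point the sketch mirrors from the abstract: because downsampling happens per frame before any cross-frame attention, the token count passed to the video-level block (and ultimately the LLM) grows linearly with a small per-frame budget, which is where the parameter and speed savings over full encoder stacks would come from.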
