Multimodal Mamba: Decoder-only Multimodal State Space Model via Quadratic to Linear Distillation
February 18, 2025
Authors: Bencheng Liao, Hongyuan Tao, Qian Zhang, Tianheng Cheng, Yingyue Li, Haoran Yin, Wenyu Liu, Xinggang Wang
cs.AI
Abstract
Recent Multimodal Large Language Models (MLLMs) have achieved remarkable
performance but face deployment challenges due to their quadratic computational
complexity, growing Key-Value cache requirements, and reliance on separate
vision encoders. We propose mmMamba, a framework for developing
linear-complexity native multimodal state space models through progressive
distillation from existing MLLMs using moderate academic computational
resources. Our approach enables the direct conversion of trained decoder-only
MLLMs to linear-complexity architectures without requiring pre-trained
RNN-based LLMs or vision encoders. We propose a seeding strategy to carve Mamba
from the trained Transformer and a three-stage distillation recipe, which can
effectively transfer the knowledge from Transformer to Mamba while preserving
multimodal capabilities. Our method also supports flexible hybrid architectures
that combine Transformer and Mamba layers for customizable
efficiency-performance trade-offs. Distilled from the Transformer-based
decoder-only HoVLE, mmMamba-linear achieves competitive performance against
existing linear and quadratic-complexity VLMs, while mmMamba-hybrid further
improves performance significantly, approaching HoVLE's capabilities. At 103K
tokens, mmMamba-linear demonstrates a 20.6× speedup and 75.8% GPU memory
reduction compared to HoVLE, while mmMamba-hybrid achieves a 13.5× speedup
and 60.2% memory savings. Code and models are released at
https://github.com/hustvl/mmMamba.
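To make the seeding idea concrete, the snippet below is a minimal PyTorch sketch of how a trained attention layer's projections could seed a linear-complexity recurrent mixer, assuming the linear-attention/SSD correspondence in which queries, keys, and values play the roles of the SSM's C, B, and x. The names (TinySSMLayer, seed_from_attention), the per-channel decay, and the explicit scan are illustrative assumptions, not the released mmMamba implementation; the decay parameter stands in for the SSM-specific parameters that the three-stage distillation would subsequently train.

```python
import torch
import torch.nn as nn

class TinySSMLayer(nn.Module):
    """Toy linear-complexity token mixer whose projections mirror an attention layer's Q/K/V."""
    def __init__(self, d_model: int):
        super().__init__()
        self.in_proj_x = nn.Linear(d_model, d_model, bias=False)  # plays the role of V
        self.in_proj_B = nn.Linear(d_model, d_model, bias=False)  # plays the role of K
        self.in_proj_C = nn.Linear(d_model, d_model, bias=False)  # plays the role of Q
        self.out_proj = nn.Linear(d_model, d_model, bias=False)
        # SSM-specific parameter with no attention counterpart: initialized fresh and
        # left for the distillation stages to train (a per-channel decay, for illustration).
        self.log_decay = nn.Parameter(torch.zeros(d_model))

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # Recurrent scan: state_t = a * state_{t-1} + B_t x_t^T ; y_t = C_t^T state_t.
        # Written as an explicit loop for clarity; cost is linear in sequence length.
        bsz, seq_len, d = u.shape
        x, B, C = self.in_proj_x(u), self.in_proj_B(u), self.in_proj_C(u)
        a = torch.sigmoid(self.log_decay)              # per-channel decay in (0, 1)
        state = u.new_zeros(bsz, d, d)
        outputs = []
        for t in range(seq_len):
            state = a.unsqueeze(-1) * state + B[:, t].unsqueeze(-1) * x[:, t].unsqueeze(1)
            outputs.append(torch.einsum("bd,bde->be", C[:, t], state))
        return self.out_proj(torch.stack(outputs, dim=1))

def seed_from_attention(ssm: TinySSMLayer, attn: nn.MultiheadAttention) -> None:
    """Carve the SSM layer out of a trained attention layer by copying its projections."""
    w_q, w_k, w_v = attn.in_proj_weight.chunk(3, dim=0)   # packed (3*d, d) QKV weight
    ssm.in_proj_C.weight.data.copy_(w_q)                  # Q -> C
    ssm.in_proj_B.weight.data.copy_(w_k)                  # K -> B
    ssm.in_proj_x.weight.data.copy_(w_v)                  # V -> x
    ssm.out_proj.weight.data.copy_(attn.out_proj.weight)

# Example (single-head case, where the dimension-wise copy is exact):
attn = nn.MultiheadAttention(embed_dim=256, num_heads=1, batch_first=True)
ssm = TinySSMLayer(256)
seed_from_attention(ssm, attn)
```

For a single-head layer this copy is exact dimension-wise; multi-head layers would need a head-wise mapping.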
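The hybrid variant can be pictured as an ordinary decoder-only stack in which only every k-th block keeps quadratic self-attention and the rest are linear-time blocks, so the attention ratio becomes the efficiency-performance knob. The sketch below reuses the hypothetical TinySSMLayer from the previous snippet; the interleaving pattern, block names, and the omission of norms and MLPs are simplifications of my own, not the released configuration.

```python
import torch
import torch.nn as nn
# Assumes TinySSMLayer from the previous sketch is defined in the same module.

class AttnBlock(nn.Module):
    """Quadratic-complexity causal self-attention mixer (norms/MLP omitted for brevity)."""
    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        seq_len = x.size(1)
        # Causal mask keeps the decoder-only (autoregressive) behaviour.
        causal = torch.triu(torch.full((seq_len, seq_len), float("-inf"), device=x.device), diagonal=1)
        out, _ = self.attn(x, x, x, attn_mask=causal, need_weights=False)
        return out

class HybridStack(nn.Module):
    """Decoder-only stack: every `attn_every`-th block is attention, the rest are SSM blocks."""
    def __init__(self, d_model: int, n_layers: int, attn_every: int = 4):
        super().__init__()
        self.blocks = nn.ModuleList([
            AttnBlock(d_model) if (i + 1) % attn_every == 0 else TinySSMLayer(d_model)
            for i in range(n_layers)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for blk in self.blocks:
            x = x + blk(x)  # simple residual around each mixer
        return x

# Example: an 8-block stack with 2 attention blocks and 6 linear-complexity blocks.
stack = HybridStack(d_model=512, n_layers=8, attn_every=4)
y = stack(torch.randn(1, 16, 512))
```

Raising `attn_every` trades accuracy for speed and memory, in the spirit of the mmMamba-linear versus mmMamba-hybrid comparison in the abstract.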