Quamba2: A Robust and Scalable Post-training Quantization Framework for Selective State Space Models
March 28, 2025
Authors: Hung-Yueh Chiang, Chi-Chih Chang, Natalia Frumkin, Kai-Chiang Wu, Mohamed S. Abdelfattah, Diana Marculescu
cs.AI
Abstract
State Space Models (SSMs) are emerging as a compelling alternative to
Transformers because of their consistent memory usage and high performance.
Despite this, scaling up SSMs on cloud services or resource-limited devices is
challenging due to their storage and compute requirements. To
overcome this, quantizing SSMs with low bit-width data formats can reduce model
size and benefit from hardware acceleration. As SSMs are prone to
quantization-induced errors, recent efforts have focused on optimizing a
particular model or bit-width for efficiency without sacrificing performance.
However, distinct bit-width configurations are essential for different
scenarios, like W4A8 for boosting large-batch decoding speed, and W4A16 for
enhancing generation speed in short-prompt, single-user applications. To
this end, we present Quamba2, compatible with W8A8, W4A8, and W4A16 for both
Mamba1 and Mamba2 backbones, addressing the growing demand for SSM deployment
on various platforms. Based on the channel-order preservation and activation
persistence of SSMs, we propose an offline approach that quantizes the inputs of the
linear recurrence to 8 bits by sorting and clustering the input x, combined
with per-state-group quantization for the input-dependent parameters B and C.
To ensure compute-invariance in the SSM output, we rearrange weights offline
according to the clustering sequence. The experiments show that Quamba2-8B
outperforms several state-of-the-art SSM quantization methods and delivers
1.3× and 3× speed-ups in the pre-filling and generation stages,
respectively, while offering a 4× memory reduction with only a 1.6%
average accuracy drop. The evaluation on MMLU shows the generalizability and
robustness of our framework. The code and quantized models will be released at:
https://github.com/enyac-group/Quamba.
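To make the sort-and-cluster idea concrete, the following is a minimal sketch (not the authors' released implementation) of how one might derive per-cluster 8-bit scales offline from calibration activations, quantize the reordered input x, and permute the upstream projection weights so the reordered computation stays numerically equivalent. The function names, the contiguous equal-size clustering, and the absolute-max calibration statistic are illustrative assumptions, not details confirmed by the abstract.

```python
# Hypothetical illustration of sort-and-cluster 8-bit activation quantization
# with offline weight rearrangement; not the Quamba2 codebase.
import numpy as np

def sort_and_cluster_scales(x_calib, n_clusters=8, n_bits=8):
    """Derive per-cluster scales from calibration activations x_calib of shape
    (tokens, channels): sort channels by absolute maximum, split them into
    contiguous clusters of similar magnitude, and give each cluster one scale."""
    ch_max = np.abs(x_calib).max(axis=0)                # per-channel abs-max
    order = np.argsort(ch_max)                          # offline channel ordering
    groups = np.array_split(order, n_clusters)          # contiguous magnitude clusters
    qmax = 2 ** (n_bits - 1) - 1
    scales = [ch_max[g].max() / qmax for g in groups]   # one scale per cluster
    return order, groups, scales

def quantize_per_cluster(x, order, groups, scales, n_bits=8):
    """Quantize activations x (tokens, channels) to int8 after applying the
    offline channel order; channels in the same cluster share a scale."""
    qmax = 2 ** (n_bits - 1) - 1
    x = x[:, order]
    x_q = np.empty_like(x, dtype=np.int8)
    start = 0
    for g, s in zip(groups, scales):
        end = start + len(g)
        x_q[:, start:end] = np.clip(np.round(x[:, start:end] / s), -qmax - 1, qmax)
        start = end
    return x_q

def rearrange_weight(w_proj, order):
    """Permute the output channels of the upstream projection so it directly
    produces activations in the clustered order, keeping the downstream SSM
    output unchanged (compute-invariance)."""
    return w_proj[order, :]
```

Under these assumptions, `sort_and_cluster_scales` and `rearrange_weight` would be run once offline on calibration data, while `quantize_per_cluster` (or a fused kernel playing its role) runs at inference time; per-state-group scales for B and C could be obtained analogously by grouping along the state dimension.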