SageAttention2 Technical Report: Accurate 4 Bit Attention for Plug-and-play Inference Acceleration
November 17, 2024
Authors: Jintao Zhang, Haofeng Huang, Pengle Zhang, Jia Wei, Jun Zhu, Jianfei Chen
cs.AI
Abstract
Although quantization for linear layers has been widely used, its application
to accelerate the attention process remains limited. SageAttention utilizes
8-bit matrix multiplication, 16-bit matrix multiplication with a 16-bit accumulator, and precision-enhancing methods, implementing an accurate kernel with a 2x speedup over FlashAttention2. To further enhance the efficiency
of attention computation while maintaining precision, we propose
SageAttention2, which utilizes significantly faster 4-bit matrix multiplication
(Matmul) alongside additional precision-enhancing techniques. First, we propose to quantize the matrices (Q, K) to INT4 at warp-level granularity and to quantize the matrices (P̃, V) to FP8. Second, we propose a method to smooth Q and V, enhancing the accuracy of attention with INT4 QK and FP8 PV.
Third, we analyze the quantization accuracy across timesteps and layers, then
propose an adaptive quantization method to preserve the end-to-end metrics across various models. The operations per second (OPS) of SageAttention2 surpass
FlashAttention2 and xformers by about 3x and 5x on RTX4090, respectively.
Comprehensive experiments confirm that our approach incurs negligible
end-to-end metrics loss across diverse models, including those for large
language processing, image generation, and video generation. The code is available at https://github.com/thu-ml/SageAttention.
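
To make the two quantization ideas named in the abstract concrete, the following is a minimal simulated sketch in PyTorch: it quantizes (Q, K) to INT4 with per-block scales, round-trips P̃ and V through FP8 storage, and applies mean-subtraction smoothing to Q and V. The block size (64), the scale granularity, the rank-1 correction term, and the FP8 emulation via a dtype round-trip are illustrative assumptions for this sketch, not the paper's exact scheme or its CUDA kernel.

```python
# Illustrative simulation of INT4 (Q, K) + FP8 (P, V) attention with smoothing.
# Not the authors' kernel: block size, scale granularity, and corrections are
# assumptions made for this sketch.
import torch

def quantize_int4_per_block(x: torch.Tensor, block: int = 64):
    """Symmetric INT4 quantization with one scale per block of rows.

    x: (seq_len, head_dim). Returns integer codes in [-7, 7], per-block scales,
    and the original length so padding can be stripped on dequantization.
    """
    n, d = x.shape
    pad = (-n) % block
    xp = torch.nn.functional.pad(x, (0, 0, 0, pad)).reshape(-1, block, d)
    scale = xp.abs().amax(dim=(1, 2), keepdim=True).clamp(min=1e-8) / 7.0
    q = torch.clamp(torch.round(xp / scale), -7, 7)
    return q, scale, n

def dequantize_int4_per_block(q, scale, n):
    return (q * scale).reshape(-1, q.shape[-1])[:n]

def fake_fp8(x: torch.Tensor):
    """Round-trip through float8_e4m3 to emulate FP8 storage (PyTorch >= 2.1)."""
    return x.to(torch.float8_e4m3fn).to(x.dtype)

def smoothed_int4_fp8_attention(Q, K, V):
    """Simulated low-bit attention: INT4 QK^T and FP8 PV with smoothing.

    Smoothing here: subtract the per-channel mean of Q (added back to the
    scores as a rank-1 correction) and the per-channel mean of V (added back
    to the output, which is valid because softmax rows sum to 1).
    """
    d = Q.shape[-1]
    q_mean = Q.mean(dim=0, keepdim=True)  # (1, d)
    v_mean = V.mean(dim=0, keepdim=True)  # (1, d)

    Qq, qs, nq = quantize_int4_per_block(Q - q_mean)
    Kq, ks, nk = quantize_int4_per_block(K)
    Qd = dequantize_int4_per_block(Qq, qs, nq)
    Kd = dequantize_int4_per_block(Kq, ks, nk)

    # Scores from low-bit Q/K plus the exact correction for the subtracted mean.
    S = (Qd @ Kd.T + q_mean @ K.T) / d ** 0.5
    P = torch.softmax(S, dim=-1)

    # FP8 P and smoothed FP8 V; the subtracted V mean is restored afterwards.
    return fake_fp8(P) @ fake_fp8(V - v_mean) + v_mean

if __name__ == "__main__":
    torch.manual_seed(0)
    Q, K, V = (torch.randn(128, 64) for _ in range(3))
    ref = torch.softmax(Q @ K.T / 64 ** 0.5, dim=-1) @ V
    out = smoothed_int4_fp8_attention(Q, K, V)
    print("max abs error vs FP32 attention:", (out - ref).abs().max().item())
```

Running the sketch shows that the error of the smoothed low-bit path stays small relative to full-precision attention, which is the accuracy argument the abstract makes; the real kernel additionally keeps the INT4 and FP8 matmuls in low precision on tensor cores rather than dequantizing as done here for simplicity.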
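The release is advertised as plug-and-play; the snippet below follows the drop-in usage pattern documented in the SageAttention repository README, where sageattn stands in for torch.nn.functional.scaled_dot_product_attention. The keyword arguments (tensor_layout, is_causal) and the tensor shapes shown are taken from the README as of this writing and may change between releases, so verify them against the repository.

```python
# Drop-in use of the released kernel (API assumed per the repository README;
# verify argument names against https://github.com/thu-ml/SageAttention).
import torch
from sageattention import sageattn

# Half-precision Q, K, V on GPU; "HND" layout = (batch, heads, seq_len, head_dim).
q = torch.randn(1, 16, 1024, 64, dtype=torch.float16, device="cuda")
k = torch.randn(1, 16, 1024, 64, dtype=torch.float16, device="cuda")
v = torch.randn(1, 16, 1024, 64, dtype=torch.float16, device="cuda")

# Replaces torch.nn.functional.scaled_dot_product_attention(q, k, v) in a model.
out = sageattn(q, k, v, tensor_layout="HND", is_causal=False)
```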