

FlatQuant: Flatness Matters for LLM Quantization

October 12, 2024
作者: Yuxuan Sun, Ruikang Liu, Haoli Bai, Han Bao, Kang Zhao, Yuening Li, Jiaxin Hu, Xianzhi Yu, Lu Hou, Chun Yuan, Xin Jiang, Wulong Liu, Jun Yao
cs.AI

Abstract

Recently, quantization has been widely used for the compression and acceleration of large language models (LLMs). Due to the outliers in LLMs, it is crucial to flatten weights and activations to minimize quantization error with the equally spaced quantization points. Prior research explores various pre-quantization transformations to suppress outliers, such as per-channel scaling and Hadamard transformation. However, we observe that these transformed weights and activations can still remain steep and outspread. In this paper, we propose FlatQuant (Fast and Learnable Affine Transformation), a new post-training quantization approach to enhance flatness of weights and activations. Our approach identifies optimal affine transformations tailored to each linear layer, calibrated in hours via a lightweight objective. To reduce runtime overhead, we apply Kronecker decomposition to the transformation matrices, and fuse all operations in FlatQuant into a single kernel. Extensive experiments show that FlatQuant sets up a new state-of-the-art quantization benchmark. For instance, it achieves less than 1% accuracy drop for W4A4 quantization on the LLaMA-3-70B model, surpassing SpinQuant by 7.5%. For inference latency, FlatQuant reduces the slowdown induced by pre-quantization transformation from 0.26x of QuaRot to merely 0.07x, bringing up to 2.3x speedup for prefill and 1.7x speedup for decoding, respectively. Code is available at: https://github.com/ruikangliu/FlatQuant.
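To make the abstract's central idea concrete, below is a minimal sketch of how an invertible affine transform factored as a Kronecker product P = P1 ⊗ P2 can flatten activations before quantization, while P^{-1} is folded into the weights so the layer output is unchanged. This is an illustrative assumption-laden toy, not the authors' implementation: the helper names (apply_kron, quantize), the random P1/P2 factors, and the simple symmetric per-tensor quantizer are all placeholders standing in for FlatQuant's learned transforms and fused kernel.

```python
import torch

torch.manual_seed(0)

d1, d2 = 8, 16                      # Kronecker factor sizes; input dim = d1 * d2
d_in, d_out, batch = d1 * d2, 64, 4

# Random invertible factors standing in for the learned affine transforms.
P1 = torch.randn(d1, d1) + 4 * torch.eye(d1)
P2 = torch.randn(d2, d2) + 4 * torch.eye(d2)

W = torch.randn(d_out, d_in)        # linear layer weight, y = x @ W.T
x = torch.randn(batch, d_in)        # activations

def apply_kron(x, A, B):
    """Right-multiply each row of x by (A ⊗ B) in factored form:
    reshape the row to a (d1, d2) matrix X and compute A^T X B,
    never materialising the full d_in x d_in transform."""
    X = x.reshape(-1, A.shape[0], B.shape[0])
    out = torch.einsum('ki,bkl,lj->bij', A, X, B)
    return out.reshape(x.shape[0], -1)

def quantize(t, bits=4):
    """Toy symmetric per-tensor quantizer with equally spaced points."""
    qmax = 2 ** (bits - 1) - 1
    scale = t.abs().max() / qmax
    return torch.round(t / scale).clamp(-qmax - 1, qmax) * scale

# Offline: fold the inverse transform into the weights once.
P = torch.kron(P1, P2)              # full matrix, used only for the offline fold
W_t = W @ torch.linalg.inv(P).T     # W~ = W P^{-T}, so (x P)(W~)^T == x W^T

# Runtime: transform activations with the factors, then quantize both operands.
x_t = apply_kron(x, P1, P2)         # x~ = x (P1 ⊗ P2)
y_quant = quantize(x_t) @ quantize(W_t).T
y_ref = x @ W.T                     # full-precision reference output

print((y_quant - y_ref).abs().mean())   # quantization error after the transform
```

The factored activation path costs two small matrix multiplies per token instead of one d_in x d_in multiply, which is the runtime saving the abstract attributes to the Kronecker decomposition; the offline weight fold and the calibration of P1/P2 against a flatness objective are where the paper's actual method lives.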
