MixLLM: LLM Quantization with Global Mixed-precision between Output-features and Highly-efficient System Design
December 19, 2024
Authors: Zhen Zheng, Xiaonan Song, Chuanjie Liu
cs.AI
Abstract
Quantization has become one of the most effective methodologies for compressing LLMs to a smaller size. However, existing quantization solutions still show limitations: either a non-negligible accuracy drop or system inefficiency. In this paper, we make a comprehensive analysis of general quantization principles and their effect on the triangle of accuracy, memory consumption, and system efficiency. We propose MixLLM, which explores the new optimization space of mixed-precision quantization between output features, based on the insight that different output features matter differently in the model. MixLLM identifies the output features with high salience in the global view rather than within each single layer, effectively assigning the larger bit-width to the output features that need it most, to achieve good accuracy with low memory consumption. We present the sweet spot of quantization configuration from algorithm-system co-design that leads to both high accuracy and system efficiency. To address the system challenge, we design a two-step dequantization that makes easy use of the int8 Tensor Core and fast data type conversion to significantly reduce dequantization overhead, and present a software pipeline that optimally overlaps memory access, dequantization, and the MatMul. Extensive experiments show that with only 10% more bits, the PPL increase can be reduced from about 0.5 for the SOTA to within 0.2 for Llama 3.1 70B, while MMLU-Pro improves by 0.93 on average over the SOTA across three popular models. In addition to its superior accuracy, MixLLM also achieves state-of-the-art system efficiency.
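
For illustration, the following is a minimal sketch (in PyTorch, not the authors' released code) of the core idea of global mixed-precision between output features: output features are ranked by a salience score across all layers at once, the top fraction keeps 8-bit weights, and the rest drop to 4-bit. The salience proxy, the 10% high-bit budget, and the symmetric per-output-feature fake quantizer are assumptions made for the sketch, not details taken from the paper.

```python
# Sketch of global mixed-precision quantization between output features.
# Assumptions (not from the paper): salience is approximated by the L2 norm of
# each output feature's weight row scaled by a per-layer calibration statistic,
# and weights are fake-quantized symmetrically per output feature (per row).
import torch


def quantize_rows(w: torch.Tensor, bits: int) -> torch.Tensor:
    """Symmetric per-output-feature (per-row) fake quantization."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / qmax
    return (w / scale).round().clamp(-qmax, qmax) * scale


def mixed_precision_quantize(weights: dict[str, torch.Tensor],
                             calib_stats: dict[str, torch.Tensor],
                             high_bit_fraction: float = 0.10) -> dict[str, torch.Tensor]:
    """Keep the globally most salient output features at 8 bits, the rest at 4 bits."""
    # 1. Score every output feature of every linear layer with a shared metric,
    #    so the high-bit budget is spent where it helps most across the model.
    scored = []  # (salience, layer_name, row_index)
    for name, w in weights.items():
        salience = w.norm(dim=1) * calib_stats[name]  # hypothetical proxy metric
        scored.extend((s.item(), name, i) for i, s in enumerate(salience))

    # 2. Pick the top fraction *globally*, not per layer.
    scored.sort(key=lambda t: t[0], reverse=True)
    n_high = int(len(scored) * high_bit_fraction)
    keep_8bit = {(name, i) for _, name, i in scored[:n_high]}

    # 3. Quantize each output feature with its assigned bit-width.
    quantized = {}
    for name, w in weights.items():
        mask = torch.tensor([(name, i) in keep_8bit for i in range(w.shape[0])])
        quantized[name] = torch.where(mask.unsqueeze(1),
                                      quantize_rows(w, bits=8),
                                      quantize_rows(w, bits=4))
    return quantized
```

The sketch covers only the bit-width assignment; the paper additionally pairs it with a two-step dequantization targeting the int8 Tensor Core and a software pipeline that overlaps memory access, dequantization, and the MatMul to reach its reported system efficiency.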