Scaling Laws for Floating Point Quantization Training
January 5, 2025
Authors: Xingwu Sun, Shuaipeng Li, Ruobing Xie, Weidong Han, Kan Wu, Zhen Yang, Yixing Li, An Wang, Shuai Li, Jinbao Xue, Yu Cheng, Yangyu Tao, Zhanhui Kang, Chengzhong Xu, Di Wang, Jie Jiang
cs.AI
Abstract
Low-precision training is considered an effective strategy for reducing both training and downstream inference costs. Previous scaling laws for precision focus mainly on integer quantization, pay less attention to the constituents of floating-point quantization, and thus cannot fit LLM losses well in this scenario. In contrast, although floating-point quantization training is more commonly used in production, research on it has remained relatively superficial. In this paper, we thoroughly explore the effects of the floating-point quantization targets, exponent bits, mantissa bits, and the calculation granularity of the scaling factor on the training performance of LLMs. Alongside an accurate unified scaling law for floating-point quantization, we offer valuable suggestions to the community: (1) Exponent bits contribute slightly more to model performance than mantissa bits. We provide the optimal exponent-mantissa bit ratio for different bit widths, which hardware manufacturers can use as a future reference. (2) We discover the formation of a critical data size in low-precision LLM training: training on data beyond this critical size degrades LLM performance. (3) The optimal floating-point quantization precision is directly proportional to the computational power, but within a wide range of computational power, we estimate that the best cost-performance precision lies between 4 and 8 bits.
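To make the quantities studied above concrete, below is a minimal Python/NumPy sketch (not the paper's implementation) of simulated floating-point quantization with configurable exponent bits, mantissa bits, and a per-block scaling factor, i.e., the knobs the abstract refers to. The function name fp_quantize, the block size of 128, and the IEEE-style bias and saturation conventions are illustrative assumptions.

```python
# Minimal illustrative sketch, not the paper's code: fake-quantize a 1-D array
# to a sign + exp_bits + man_bits floating-point format with a per-block scale.
import numpy as np

def fp_quantize(x, exp_bits=4, man_bits=3, block_size=128):
    """Simulate quantization of x to a low-bit float format, then dequantize back."""
    bias = 2 ** (exp_bits - 1) - 1                      # assumed IEEE-style exponent bias
    max_exp = 2 ** exp_bits - 2 - bias                  # top exponent code reserved (assumption)
    fmt_max = (2.0 - 2.0 ** (-man_bits)) * 2.0 ** max_exp  # largest representable magnitude

    out = np.empty_like(x, dtype=np.float64)
    for start in range(0, len(x), block_size):
        blk = x[start:start + block_size].astype(np.float64)
        # Per-block scaling factor: map the block's max magnitude onto fmt_max.
        scale = np.max(np.abs(blk)) / fmt_max
        scale = scale if scale > 0 else 1.0
        y = blk / scale
        # Quantize the mantissa: spacing of representable values is 2^(exp - man_bits).
        exp = np.floor(np.log2(np.maximum(np.abs(y), 2.0 ** (-bias))))
        exp = np.clip(exp, -bias, max_exp)
        step = 2.0 ** (exp - man_bits)
        yq = np.clip(np.round(y / step) * step, -fmt_max, fmt_max)
        out[start:start + block_size] = yq * scale      # dequantize back to full precision
    return out.astype(x.dtype)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=1024).astype(np.float32)
    w_q = fp_quantize(w, exp_bits=4, man_bits=3)        # an E4M3-like 8-bit layout
    print("mean squared quantization error:", np.mean((w - w_q) ** 2))
```

In this sketch, exp_bits and man_bits play the role of the exponent-mantissa split discussed in suggestion (1), and block_size controls the calculation granularity of the scaling factor: a block size of 1 corresponds to per-element scaling, while a block covering the whole tensor corresponds to per-tensor scaling.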