Addition is All You Need for Energy-efficient Language Models

October 1, 2024
Authors: Hongyin Luo, Wei Sun
cs.AI

Abstract

Large neural networks spend most of their computation on floating point tensor multiplications. In this work, we find that a floating point multiplier can be approximated by one integer adder with high precision. We propose the linear-complexity multiplication (L-Mul) algorithm, which approximates floating point multiplication with integer addition operations. Compared to 8-bit floating point multiplication, the proposed method achieves higher precision while consuming significantly less bit-level computation. Since multiplying floating point numbers requires substantially more energy than integer addition, applying the L-Mul operation in tensor processing hardware can potentially reduce the energy cost of element-wise floating point tensor multiplications by 95% and the energy cost of dot products by 80%. We calculate the theoretical error expectation of L-Mul and evaluate the algorithm on a wide range of textual, visual, and symbolic tasks, including natural language understanding, structural reasoning, mathematics, and commonsense question answering. Our numerical analysis experiments agree with the theoretical error estimation: L-Mul with a 4-bit mantissa achieves precision comparable to float8_e4m3 multiplication, and L-Mul with a 3-bit mantissa outperforms float8_e5m2. Evaluation results on popular benchmarks show that directly applying L-Mul to the attention mechanism is almost lossless. We further show that replacing all floating point multiplications in a transformer model with 3-bit mantissa L-Mul achieves precision equivalent to using float8_e4m3 as the accumulation precision in both fine-tuning and inference.
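
To make the core trick concrete, here is a minimal Python sketch of the approximation the abstract describes: for a = (1 + xa) * 2^ea and b = (1 + xb) * 2^eb, L-Mul replaces the mantissa product xa * xb with a constant offset 2^(-l), so the result (1 + xa + xb + 2^(-l)) * 2^(ea + eb) needs only additions. The function name l_mul, the mantissa_bits parameter, and the decomposition via math.frexp are illustrative assumptions for this sketch, and the offset rule for l follows the paper's description as understood here; the actual method adds operand bit patterns with a single integer adder rather than operating on Python floats.

```python
import math

def l_mul(a: float, b: float, mantissa_bits: int = 4) -> float:
    """Approximate a * b by adding exponents and mantissa fractions
    (an illustrative sketch of L-Mul, not the bit-level hardware version)."""
    if a == 0.0 or b == 0.0:
        return 0.0
    sign = -1.0 if (a < 0) != (b < 0) else 1.0

    # Decompose |a| = (1 + xa) * 2**ea with xa in [0, 1); same for |b|.
    fa, ea = math.frexp(abs(a))          # fa in [0.5, 1)
    fb, eb = math.frexp(abs(b))
    xa, ea = 2.0 * fa - 1.0, ea - 1
    xb, eb = 2.0 * fb - 1.0, eb - 1

    # Truncate mantissa fractions to the given number of bits (fp8-style).
    scale = 1 << mantissa_bits
    xa = math.floor(xa * scale) / scale
    xb = math.floor(xb * scale) / scale

    # Offset exponent l replacing the xa * xb term (assumed rule from the
    # paper: l = m for m <= 3, l = 3 for m = 4, l = 4 for m > 4).
    if mantissa_bits <= 3:
        l = mantissa_bits
    elif mantissa_bits == 4:
        l = 3
    else:
        l = 4

    # (1 + xa) * (1 + xb) ~= 1 + xa + xb + 2**(-l); exponents simply add.
    return sign * (1.0 + xa + xb + 2.0 ** (-l)) * 2.0 ** (ea + eb)

if __name__ == "__main__":
    for a, b in [(3.14159, -2.71828), (0.1875, 12.5)]:
        approx, exact = l_mul(a, b), a * b
        print(f"{a} * {b}: exact={exact:.6f}, l_mul={approx:.6f}, "
              f"rel_err={abs(approx - exact) / abs(exact):.4%}")
```

In this sketch the error has two sources, the truncation of each mantissa to a few bits and the constant offset standing in for the mantissa product, which is why the abstract compares L-Mul precision against fp8 formats rather than against full-precision multiplication.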
