The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits

February 27, 2024
Authors: Shuming Ma, Hongyu Wang, Lingxiao Ma, Lei Wang, Wenhui Wang, Shaohan Huang, Li Dong, Ruiping Wang, Jilong Xue, Furu Wei
cs.AI

Abstract

Recent research, such as BitNet, is paving the way for a new era of 1-bit Large Language Models (LLMs). In this work, we introduce a 1-bit LLM variant, namely BitNet b1.58, in which every single parameter (or weight) of the LLM is ternary {-1, 0, 1}. It matches the full-precision (i.e., FP16 or BF16) Transformer LLM with the same model size and training tokens in terms of both perplexity and end-task performance, while being significantly more cost-effective in terms of latency, memory, throughput, and energy consumption. More profoundly, the 1.58-bit LLM defines a new scaling law and recipe for training new generations of LLMs that are both high-performance and cost-effective. Furthermore, it enables a new computation paradigm and opens the door for designing specific hardware optimized for 1-bit LLMs.
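To make the "1.58 bits" concrete: each ternary weight in {-1, 0, 1} carries log2(3) ≈ 1.58 bits of information. A common way to obtain such weights, described in the BitNet b1.58 paper as absmean quantization, is to scale a weight matrix by its mean absolute value and then round each entry to the nearest value in {-1, 0, 1}. The sketch below illustrates this; the function name and NumPy implementation are our own, not code from the paper.

```python
import numpy as np

def absmean_ternary_quantize(W, eps=1e-6):
    """Quantize a weight matrix to ternary values {-1, 0, 1}.

    Scales W by its mean absolute value (absmean), then rounds and
    clips each entry to the nearest value in {-1, 0, 1}. Returns the
    ternary matrix and the scale, which is reapplied at matmul time.
    """
    gamma = np.mean(np.abs(W)) + eps            # absmean scale (eps avoids div-by-zero)
    W_ternary = np.clip(np.round(W / gamma), -1, 1)
    return W_ternary, gamma

# Each ternary weight encodes log2(3) ≈ 1.58 bits, hence the name.
W = np.random.randn(4, 4).astype(np.float32)
Wq, gamma = absmean_ternary_quantize(W)
assert set(np.unique(Wq)).issubset({-1.0, 0.0, 1.0})
```

Because the quantized weights are only -1, 0, or 1, the matrix multiplications in inference reduce to additions and subtractions, which is the source of the latency and energy savings the abstract describes.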
