The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits
February 27, 2024
Authors: Shuming Ma, Hongyu Wang, Lingxiao Ma, Lei Wang, Wenhui Wang, Shaohan Huang, Li Dong, Ruiping Wang, Jilong Xue, Furu Wei
cs.AI
Abstract
Recent research, such as BitNet, is paving the way for a new era of 1-bit
Large Language Models (LLMs). In this work, we introduce a 1-bit LLM variant,
namely BitNet b1.58, in which every single parameter (or weight) of the LLM is
ternary {-1, 0, 1}. It matches the full-precision (i.e., FP16 or BF16)
Transformer LLM with the same model size and training tokens in terms of both
perplexity and end-task performance, while being significantly more
cost-effective in terms of latency, memory, throughput, and energy consumption.
More profoundly, the 1.58-bit LLM defines a new scaling law and recipe for
training new generations of LLMs that are both high-performance and
cost-effective. Furthermore, it enables a new computation paradigm and opens
the door for designing specific hardware optimized for 1-bit LLMs.
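The abstract states that every weight is ternary {-1, 0, 1}; since a three-valued symbol carries log2(3) ≈ 1.58 bits, this is where the "1.58-bit" name comes from. Below is a minimal PyTorch sketch of one way such ternary weight quantization could be implemented. The absmean scaling and the ternary_quantize helper are illustrative assumptions for this sketch, not code taken from the paper.

    import torch

    def ternary_quantize(w: torch.Tensor, eps: float = 1e-6):
        # Per-tensor absmean scale (assumed here); the clamp avoids division by zero.
        scale = w.abs().mean().clamp(min=eps)
        # Round to the nearest integer and clip, so every weight lands in {-1, 0, 1}.
        w_q = (w / scale).round().clamp(-1, 1)
        # Returning the scale lets matmul outputs be rescaled back to the original range.
        return w_q, scale

    # Usage: each quantized weight takes one of three values,
    # i.e. log2(3) ~= 1.58 bits of information per parameter.
    w = torch.randn(4, 8)
    w_q, scale = ternary_quantize(w)
    print(torch.unique(w_q))  # a subset of tensor([-1., 0., 1.])

With weights restricted to {-1, 0, 1}, the matrix multiplications in inference reduce to additions and subtractions, which is the source of the latency, memory, and energy savings the abstract claims.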