
BitNet b1.58 2B4T Technical Report

April 16, 2025
Authors: Shuming Ma, Hongyu Wang, Shaohan Huang, Xingxing Zhang, Ying Hu, Ting Song, Yan Xia, Furu Wei
cs.AI

Abstract

We introduce BitNet b1.58 2B4T, the first open-source, native 1-bit Large Language Model (LLM) at the 2-billion parameter scale. Trained on a corpus of 4 trillion tokens, the model has been rigorously evaluated across benchmarks covering language understanding, mathematical reasoning, coding proficiency, and conversational ability. Our results demonstrate that BitNet b1.58 2B4T achieves performance on par with leading open-weight, full-precision LLMs of similar size, while offering significant advantages in computational efficiency, including substantially reduced memory footprint, energy consumption, and decoding latency. To facilitate further research and adoption, the model weights are released via Hugging Face along with open-source inference implementations for both GPU and CPU architectures.
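The "native 1-bit" phrasing refers to ternary weights in {-1, 0, +1} (roughly 1.58 bits per weight). The abstract itself gives no implementation details, so the following is only a minimal PyTorch sketch of the absmean ternary quantization associated with the BitNet b1.58 line of work; the function name, epsilon, and per-tensor scaling are illustrative assumptions, not taken from this report.

```python
import torch

def absmean_ternary_quantize(w: torch.Tensor, eps: float = 1e-5):
    """Illustrative sketch of absmean ternary (1.58-bit) weight quantization;
    details are assumptions, not taken from this report's text."""
    # Per-tensor scale: mean absolute value of the weight matrix.
    gamma = w.abs().mean()
    # Scale, round to the nearest integer, and clip to the ternary set {-1, 0, +1}.
    w_ternary = (w / (gamma + eps)).round().clamp(-1, 1)
    # gamma is returned so outputs can be rescaled after the ternary matmul.
    return w_ternary, gamma

# Usage: quantize a random weight matrix and confirm the ternary codebook.
w = torch.randn(4, 8)
wq, gamma = absmean_ternary_quantize(w)
print(wq.unique())  # tensor([-1., 0., 1.])
```

Keeping weights ternary is what enables the efficiency gains the abstract cites (reduced memory footprint, energy consumption, and decoding latency), since matrix multiplications reduce to additions and subtractions plus a single rescale.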
