Byte Latent Transformer: Patches Scale Better Than Tokens

December 13, 2024
Authors: Artidoro Pagnoni, Ram Pasunuru, Pedro Rodriguez, John Nguyen, Benjamin Muller, Margaret Li, Chunting Zhou, Lili Yu, Jason Weston, Luke Zettlemoyer, Gargi Ghosh, Mike Lewis, Ari Holtzman, Srinivasan Iyer
cs.AI

Abstract

We introduce the Byte Latent Transformer (BLT), a new byte-level LLM architecture that, for the first time, matches tokenization-based LLM performance at scale with significant improvements in inference efficiency and robustness. BLT encodes bytes into dynamically sized patches, which serve as the primary units of computation. Patches are segmented based on the entropy of the next byte, allocating more compute and model capacity where increased data complexity demands it. We present the first FLOP controlled scaling study of byte-level models up to 8B parameters and 4T training bytes. Our results demonstrate the feasibility of scaling models trained on raw bytes without a fixed vocabulary. Both training and inference efficiency improve due to dynamically selecting long patches when data is predictable, along with qualitative improvements on reasoning and long tail generalization. Overall, for fixed inference costs, BLT shows significantly better scaling than tokenization-based models, by simultaneously growing both patch and model size.
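
The central mechanism described in the abstract is entropy-based patch segmentation: a small byte-level model estimates the entropy of the next byte, and a new patch begins where that entropy is high, so the large latent transformer spends more compute on unpredictable regions and less on predictable ones. The sketch below is a minimal illustration of one reading of that rule (a simple global entropy threshold), not the paper's implementation: the threshold value, function names, and the toy empirical-entropy stand-in (used here in place of the paper's small entropy model) are assumptions for demonstration only.

```python
import math
from typing import Dict, List, Sequence


def entropy_patches(byte_seq: bytes,
                    entropies: Sequence[float],
                    threshold: float = 3.0) -> List[bytes]:
    """Segment a byte sequence into patches: start a new patch at
    position i whenever the estimated next-byte entropy H(x_i | x_<i)
    exceeds `threshold` (a simple global-threshold rule)."""
    assert len(byte_seq) == len(entropies)
    patches: List[bytes] = []
    current = bytearray()
    for b, h in zip(byte_seq, entropies):
        # High uncertainty about this byte -> patch boundary, so the
        # large latent transformer spends a computation step here.
        if current and h > threshold:
            patches.append(bytes(current))
            current = bytearray()
        current.append(b)
    if current:
        patches.append(bytes(current))
    return patches


def toy_entropies(byte_seq: bytes) -> List[float]:
    """Toy stand-in for a byte-level entropy model: entropy of the
    empirical byte distribution seen so far. BLT instead uses a
    learned model's predicted next-byte distribution."""
    counts: Dict[int, int] = {}
    out: List[float] = []
    for b in byte_seq:
        total = sum(counts.values())
        if total:
            out.append(-sum((c / total) * math.log2(c / total)
                            for c in counts.values()))
        else:
            out.append(8.0)  # no context yet: assume maximal byte entropy
        counts[b] = counts.get(b, 0) + 1
    return out


if __name__ == "__main__":
    data = "Patches scale better than tokens.".encode("utf-8")
    for patch in entropy_patches(data, toy_entropies(data), threshold=3.0):
        print(patch)
```

Under such a rule, long stretches of predictable bytes collapse into a single long patch while unpredictable regions are split finely, which is what the abstract refers to when it credits dynamically selected long patches for the training and inference efficiency gains.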
