Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN
December 18, 2024
Authors: Pengxiang Li, Lu Yin, Shiwei Liu
cs.AI
Abstract
Large Language Models (LLMs) have achieved remarkable success, yet recent findings reveal that their deeper layers often contribute minimally and can be pruned without affecting overall performance. While some view this as an opportunity for model compression, we identify it as a training shortfall rooted in the widespread use of Pre-Layer Normalization (Pre-LN). We demonstrate that Pre-LN, commonly employed in models like GPT and LLaMA, leads to diminished gradient norms in its deeper layers, reducing their effectiveness. In contrast, Post-Layer Normalization (Post-LN) preserves larger gradient norms in deeper layers but suffers from vanishing gradients in earlier layers. To address this, we introduce Mix-LN, a novel normalization technique that combines the strengths of Pre-LN and Post-LN within the same model. Mix-LN applies Post-LN to the earlier layers and Pre-LN to the deeper layers, ensuring more uniform gradients across layers. This allows all parts of the network, both shallow and deep layers, to contribute effectively to training. Extensive experiments with model sizes from 70M to 7B demonstrate that Mix-LN consistently outperforms both Pre-LN and Post-LN, promoting more balanced, healthier gradient norms throughout the network and enhancing the overall quality of LLM pre-training. Furthermore, we demonstrate that models pre-trained with Mix-LN learn better than those using Pre-LN or Post-LN during supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), highlighting the critical importance of high-quality deep layers. By effectively addressing the inefficiencies of deep layers in current LLMs, Mix-LN unlocks their potential, enhancing model capacity without increasing model size. Our code is available at https://github.com/pixeli99/MixLN.
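The core architectural idea, applying Post-LN to the first portion of the layer stack and Pre-LN to the rest, can be sketched in a few lines of PyTorch. The block below is a minimal illustration rather than the authors' implementation: the MixLNBlock name, the post_ln_fraction cutoff (set to 0.25 here purely as an example), and the sublayer details are assumptions for exposition; the reference code lives at https://github.com/pixeli99/MixLN.

# Minimal sketch (not the authors' code): a Transformer block that uses
# Post-LN residual wiring in early layers and Pre-LN wiring in deep layers,
# which is the layer-wise split Mix-LN proposes. Attention masking and
# dropout are omitted for brevity.
import torch
import torch.nn as nn


class MixLNBlock(nn.Module):
    def __init__(self, d_model, n_heads, layer_idx, n_layers, post_ln_fraction=0.25):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)
        # Early layers use Post-LN; deeper layers use Pre-LN.
        self.use_post_ln = layer_idx < int(post_ln_fraction * n_layers)

    def forward(self, x):
        if self.use_post_ln:
            # Post-LN: normalize after each residual addition.
            x = self.ln1(x + self.attn(x, x, x, need_weights=False)[0])
            x = self.ln2(x + self.mlp(x))
        else:
            # Pre-LN: normalize the input to each sublayer.
            h = self.ln1(x)
            x = x + self.attn(h, h, h, need_weights=False)[0]
            x = x + self.mlp(self.ln2(x))
        return x

Stacking such blocks, for example MixLNBlock(d_model, n_heads, layer_idx=i, n_layers=32) for i in range(32), would give the first eight layers Post-LN wiring and the remaining twenty-four Pre-LN wiring, matching the early/deep split described in the abstract; the actual cutoff fraction is a tunable choice, not a value stated here.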