HybridNorm: Towards Stable and Efficient Transformer Training via Hybrid Normalization
March 6, 2025
Authors: Zhijian Zhuo, Yutao Zeng, Ya Wang, Sijun Zhang, Jian Yang, Xiaoqing Li, Xun Zhou, Jinwen Ma
cs.AI
Abstract
Transformers have become the de facto architecture for a wide range of machine learning tasks, particularly in large language models (LLMs). Despite their remarkable performance, challenges remain in training deep transformer networks, especially regarding the location of layer normalization. While Pre-Norm structures facilitate easier training due to their more prominent identity path, they often yield suboptimal performance compared to Post-Norm. In this paper, we propose HybridNorm, a straightforward yet effective hybrid normalization strategy that integrates the advantages of both Pre-Norm and Post-Norm approaches. Specifically, HybridNorm employs QKV normalization within the attention mechanism and Post-Norm in the feed-forward network (FFN) of each transformer block. This design not only stabilizes training but also enhances performance, particularly in the context of LLMs. Comprehensive experiments in both dense and sparse architectures show that HybridNorm consistently outperforms both Pre-Norm and Post-Norm approaches, achieving state-of-the-art results across various benchmarks. These findings highlight the potential of HybridNorm as a more stable and effective technique for improving the training and performance of deep transformer models. Code is available at https://github.com/BryceZhuo/HybridNorm.
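The abstract's design can be sketched in code: QKV normalization inside the attention sublayer and Post-Norm around the FFN sublayer. This is a minimal PyTorch illustration of that idea, not the authors' implementation; the choice of `nn.LayerNorm` (rather than, say, RMSNorm), per-projection norms, and all hyperparameters are assumptions for the sketch.

```python
import torch
import torch.nn as nn

class HybridNormBlock(nn.Module):
    """One transformer block sketching the HybridNorm idea:
    QKV normalization in attention, Post-Norm for the FFN (illustrative only)."""

    def __init__(self, d_model: int, n_heads: int, d_ff: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.o_proj = nn.Linear(d_model, d_model)
        # Hypothetical choice: a separate LayerNorm for each of Q, K, V
        self.q_norm = nn.LayerNorm(d_model)
        self.k_norm = nn.LayerNorm(d_model)
        self.v_norm = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
        )
        self.ffn_norm = nn.LayerNorm(d_model)  # Post-Norm for the FFN sublayer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, D = x.shape
        H, d_head = self.n_heads, D // self.n_heads
        # Attention sublayer: normalize the Q, K, V projections (QKV-Norm)
        q = self.q_norm(self.q_proj(x)).view(B, T, H, d_head).transpose(1, 2)
        k = self.k_norm(self.k_proj(x)).view(B, T, H, d_head).transpose(1, 2)
        v = self.v_norm(self.v_proj(x)).view(B, T, H, d_head).transpose(1, 2)
        scores = q @ k.transpose(-2, -1) / d_head ** 0.5
        attn = torch.softmax(scores, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, T, D)
        x = x + self.o_proj(out)  # residual add for attention
        # FFN sublayer in Post-Norm form: norm applied AFTER the residual add
        return self.ffn_norm(x + self.ffn(x))
```

Note the asymmetry this sketch captures: the attention sublayer keeps an unnormalized identity path (helping trainability, as Pre-Norm does), while the FFN sublayer is normalized after its residual, in the Post-Norm style the paper credits for stronger final performance.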