KV Shifting Attention Enhances Language Modeling

November 29, 2024
Authors: Mingyu Xu, Wei Cheng, Bingning Wang, Weipeng Chen
cs.AI

Abstract

Current large language models are mainly based on decoder-only Transformers, which have strong in-context learning (ICL) capabilities. It is generally believed that an important foundation of this ICL capability is the induction heads mechanism, which requires at least two layers of attention. To implement the model's induction ability more efficiently, we revisit the induction heads mechanism and propose KV shifting attention. We theoretically prove that KV shifting attention reduces the model's depth and width requirements for the induction heads mechanism. Our experimental results demonstrate that KV shifting attention is beneficial to learning induction heads and to language modeling, leading to better performance or faster convergence from toy models to pre-trained models with more than 10B parameters.
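The core idea can be sketched in code. Below is a minimal, single-head PyTorch sketch, assuming the shift takes the form of a learnable mix of each position's key/value with the previous position's key/value before standard causal attention; the class name, parameter initialization, and single-head simplification are illustrative and not the authors' reference implementation.

```python
# Minimal sketch of a KV-shifting attention layer (single head, PyTorch).
# Assumption (illustrative, not the paper's reference code): keys and values
# are replaced by a learnable mix of the current and previous positions
# before standard causal softmax attention is applied.
import torch
import torch.nn as nn
import torch.nn.functional as F


class KVShiftingAttention(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.o_proj = nn.Linear(d_model, d_model)
        # Learnable mixing weights: [current-token weight, previous-token weight].
        self.alpha = nn.Parameter(torch.tensor([1.0, 0.0]))  # for keys
        self.beta = nn.Parameter(torch.tensor([1.0, 0.0]))   # for values

    @staticmethod
    def _shift(x: torch.Tensor) -> torch.Tensor:
        # Shift the sequence one step to the right; position 0 gets zeros.
        return F.pad(x, (0, 0, 1, 0))[:, :-1, :]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        k = self.alpha[0] * k + self.alpha[1] * self._shift(k)
        v = self.beta[0] * v + self.beta[1] * self._shift(v)
        scores = q @ k.transpose(-2, -1) / (k.shape[-1] ** 0.5)
        causal = torch.triu(
            torch.ones(x.shape[1], x.shape[1], dtype=torch.bool, device=x.device), 1
        )
        scores = scores.masked_fill(causal, float("-inf"))
        return self.o_proj(F.softmax(scores, dim=-1) @ v)


# Usage: drop-in replacement for a standard single-head self-attention block.
attn = KVShiftingAttention(d_model=64)
out = attn(torch.randn(2, 16, 64))  # -> (2, 16, 64)
```

Intuitively, mixing in the previous token's key and value gives a single attention layer direct access to "the token that follows X", the pattern that induction heads otherwise have to compose across two layers of attention.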
