KV Shifting Attention Enhances Language Modeling

November 29, 2024
Authors: Mingyu Xu, Wei Cheng, Bingning Wang, Weipeng Chen
cs.AI

Abstract

Current large language models are mainly based on decoder-only transformers, which have strong in-context learning (ICL) capabilities. It is generally believed that an important foundation of this ICL capability is the induction heads mechanism, which requires at least two layers of attention. To realize the model's induction ability more efficiently, we revisit the induction heads mechanism and propose KV shifting attention. We theoretically prove that KV shifting attention reduces the depth and width the model needs for the induction heads mechanism. Our experimental results demonstrate that KV shifting attention is beneficial to learning induction heads and to language modeling, leading to better performance or faster convergence from toy models up to pre-trained models with more than 10B parameters.
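The abstract does not spell out the mechanism, but the core idea is to let each attention head mix a token's keys and values with those of the preceding token before standard causal attention, so that a single layer can express the "copy from the previous occurrence" pattern that the induction-heads circuit otherwise needs two layers for. The following is a minimal PyTorch sketch under that assumption; the module name, the per-head scalars `alpha1`/`alpha2`/`beta1`/`beta2`, and the single-position shift are illustrative and not necessarily the paper's exact parameterization.

```python
# Sketch of KV shifting attention (illustrative; parameter names are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class KVShiftingAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model, bias=False)
        self.k_proj = nn.Linear(d_model, d_model, bias=False)
        self.v_proj = nn.Linear(d_model, d_model, bias=False)
        self.o_proj = nn.Linear(d_model, d_model, bias=False)
        # Learnable per-head mixing weights for current vs. previous-token keys/values.
        self.alpha1 = nn.Parameter(torch.ones(n_heads))
        self.alpha2 = nn.Parameter(torch.zeros(n_heads))
        self.beta1 = nn.Parameter(torch.ones(n_heads))
        self.beta2 = nn.Parameter(torch.zeros(n_heads))

    @staticmethod
    def _shift(x: torch.Tensor) -> torch.Tensor:
        # Shift the sequence right by one position: token i receives token i-1's key/value.
        return F.pad(x, (0, 0, 1, 0))[:, :, :-1, :]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape

        def split(z: torch.Tensor) -> torch.Tensor:
            # (b, t, d_model) -> (b, n_heads, t, d_head)
            return z.view(b, t, self.n_heads, self.d_head).transpose(1, 2)

        q, k, v = split(self.q_proj(x)), split(self.k_proj(x)), split(self.v_proj(x))
        a1 = self.alpha1.view(1, -1, 1, 1)
        a2 = self.alpha2.view(1, -1, 1, 1)
        b1 = self.beta1.view(1, -1, 1, 1)
        b2 = self.beta2.view(1, -1, 1, 1)
        # KV shifting: mix each token's key/value with its predecessor's.
        k = a1 * k + a2 * self._shift(k)
        v = b1 * v + b2 * self._shift(v)
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        out = out.transpose(1, 2).reshape(b, t, -1)
        return self.o_proj(out)

# Example usage on random inputs.
attn = KVShiftingAttention(d_model=64, n_heads=4)
y = attn(torch.randn(2, 16, 64))  # -> shape (2, 16, 64)
```

Because the shift only touches the key/value projections, the change is a drop-in replacement for a standard causal attention layer and adds only a few scalar parameters per head.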
