Why Does the Effective Context Length of LLMs Fall Short?
October 24, 2024
Authors: Chenxin An, Jun Zhang, Ming Zhong, Lei Li, Shansan Gong, Yao Luo, Jingjing Xu, Lingpeng Kong
cs.AI
Abstract
Advancements in distributed training and efficient attention mechanisms have
significantly expanded the context window sizes of large language models
(LLMs). However, recent work reveals that the effective context lengths of
open-source LLMs often fall short, typically not exceeding half of their
training lengths. In this work, we attribute this limitation to the left-skewed
frequency distribution of relative positions formed in LLMs pretraining and
post-training stages, which impedes their ability to effectively gather distant
information. To address this challenge, we introduce ShifTed Rotary position
embeddING (STRING). STRING shifts well-trained positions to overwrite the
original ineffective positions during inference, enhancing performance within
their existing training lengths. Experimental results show that without
additional training, STRING dramatically improves the performance of the latest
large-scale models, such as Llama3.1 70B and Qwen2 72B, by over 10 points on
popular long-context benchmarks RULER and InfiniteBench, establishing new
state-of-the-art results for open-source LLMs. Compared to commercial models,
Llama 3.1 70B with STRING even achieves better performance than GPT-4-128K and
clearly surpasses Claude 2 and Kimi-chat.
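The abstract's core idea is that the largest relative positions in a long input are rarely seen during training, so at inference time they can be overwritten with smaller, well-trained positions while a local band near the diagonal keeps its original values. The snippet below is a minimal toy sketch of that idea, not the authors' implementation: the construction and the `shift` and `local_window` parameters are illustrative assumptions, and the paper's actual recipe may differ.

```python
# Toy illustration of inference-time position shifting: reuse well-trained
# (small) relative positions in place of the rarely-trained largest ones.
# NOTE: `shift` and `local_window` are hypothetical parameters for this sketch.
import numpy as np

def shifted_relative_positions(seq_len: int, shift: int, local_window: int) -> np.ndarray:
    """Causal relative-position matrix with the distant region shifted.

    Pairs closer than `shift` keep the ordinary relative position i - j.
    More distant pairs are moved down by `shift` and offset by `local_window`,
    so they land in the frequently-trained range without colliding with the
    local diagonal band.
    """
    i = np.arange(seq_len)[:, None]      # query index
    j = np.arange(seq_len)[None, :]      # key index
    rel = i - j                          # standard relative position
    shifted = np.where(rel >= shift, rel - shift + local_window, rel)
    return np.where(rel < 0, 0, shifted) # upper triangle is causally masked anyway

# Example: 12 tokens with a hypothetical shift of 6 and local window of 2.
# Relative positions >= 6 (the least-trained ones) are remapped into [2, 8),
# so the largest distances 8-11 are never used at inference time.
print(shifted_relative_positions(12, shift=6, local_window=2))
```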