

LongRoPE2: Near-Lossless LLM Context Window Scaling

February 27, 2025
Authors: Ning Shang, Li Lyna Zhang, Siyuan Wang, Gaokai Zhang, Gilsinia Lopez, Fan Yang, Weizhu Chen, Mao Yang
cs.AI

Abstract

LongRoPE2 is a novel approach that extends the effective context window of pre-trained large language models (LLMs) to the target length, while preserving the performance on the original shorter context window. This is achieved by three contributions: (1) a hypothesis that insufficient training in higher RoPE dimensions contributes to the persistent out-of-distribution (OOD) issues observed in existing methods; (2) an effective RoPE rescaling algorithm that adopts evolutionary search guided by "needle-driven" perplexity to address the insufficient training problem; (3) a mixed context window training approach that fine-tunes model weights to adopt rescaled RoPE for long-context sequences while preserving the short-context performance with the original RoPE. Extensive experiments on LLaMA3-8B and Phi3-mini-3.8B across various benchmarks validate the hypothesis and demonstrate the effectiveness of LongRoPE2. Remarkably, LongRoPE2 extends LLaMA3-8B to achieve a 128K effective context length while retaining over 98.5% of short-context performance, using only 10B tokens -- 80x fewer than Meta's approach, which fails to reach the target effective context length. Code will be available at https://github.com/microsoft/LongRoPE.
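To make the rescaling idea in the abstract concrete, the sketch below shows how per-dimension rescaling factors modify standard RoPE frequencies: larger factors slow the rotation of the lower-frequency (higher) dimensions, which the paper identifies as insufficiently trained. This is a minimal NumPy illustration; the factor values and function names are hypothetical placeholders, not the factors produced by LongRoPE2's needle-driven evolutionary search or its mixed context window training.

```python
import numpy as np

def rope_frequencies(head_dim, base=10000.0, rescale_factors=None):
    """Per-dimension RoPE angular frequencies.

    rescale_factors: optional array of shape (head_dim // 2,); values > 1
    stretch the rotation period of that dimension, which is how
    rescaling-based context extension adapts a pretrained model to longer
    sequences. The factors used here are illustrative only.
    """
    dims = np.arange(0, head_dim, 2)
    freqs = 1.0 / (base ** (dims / head_dim))          # standard RoPE schedule
    if rescale_factors is not None:
        freqs = freqs / np.asarray(rescale_factors)    # slow down rotations
    return freqs

def rotary_embed(x, positions, freqs):
    """Apply rotary position embedding to x of shape (seq_len, head_dim)."""
    angles = np.outer(positions, freqs)                # (seq_len, head_dim // 2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# Hypothetical factors: higher (lower-frequency) dimensions rescaled more
# aggressively, mirroring the paper's hypothesis about those dimensions.
head_dim = 64
factors = np.linspace(1.0, 8.0, head_dim // 2)
x = np.random.randn(16, head_dim)
y = rotary_embed(x, np.arange(16), rope_frequencies(head_dim, rescale_factors=factors))
print(y.shape)  # (16, 64)
```

In the actual method, these per-dimension factors are not hand-picked; they are searched with a perplexity objective on "needle" sequences, and the model is then fine-tuned with the rescaled RoPE for long contexts while keeping the original RoPE for short ones.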

