InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on a Single GPU
February 13, 2025
Authors: Heejun Lee, Geon Park, Jaduk Suh, Sung Ju Hwang
cs.AI
Abstract
In modern large language models (LLMs), handling very long context lengths
presents significant challenges as it causes slower inference speeds and
increased memory costs. Additionally, most existing pre-trained LLMs fail to
generalize beyond their original training sequence lengths. To enable efficient
and practical long-context utilization, we introduce InfiniteHiP, a novel and
practical LLM inference framework that accelerates processing by dynamically
eliminating irrelevant context tokens through a modular hierarchical token
pruning algorithm. Our method also allows generalization to longer sequences by
selectively applying various RoPE adjustment methods according to the internal
attention patterns within LLMs. Furthermore, we offload the key-value cache to
host memory during inference, significantly reducing GPU memory pressure. As a
result, InfiniteHiP enables the processing of up to 3 million tokens on a
single L40s 48GB GPU -- 3x larger -- without any permanent loss of context
information. Our framework achieves an 18.95x speedup in attention decoding for
a 1 million token context without requiring additional training. We implement
our method in the SGLang framework and demonstrate its effectiveness and
practicality through extensive evaluations.
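
The central mechanism described above is the modular hierarchical token pruning step: for each query, the cached keys are scanned at a coarse block granularity, low-scoring blocks are discarded, and the surviving blocks are repeatedly split and re-scored until only a small token budget remains for exact attention. The sketch below illustrates that idea only and is not the authors' kernel; the function name, the block sizes, and the choice of a block's middle key as its representative are assumptions made here for clarity.

```python
import torch

def hierarchical_prune(q, k, n_keep_blocks=32, final_block=64, init_block=4096):
    """Illustrative blockwise hierarchical pruning for one decoding query.

    q: (d,) query vector; k: (T, d) cached keys, with T >= init_block.
    Returns the positions of roughly n_keep_blocks * final_block tokens that
    survive pruning; exact attention is then computed only over these tokens.
    """
    T = k.size(0)
    idx = torch.arange(T - T % init_block, device=k.device)  # drop ragged tail for simplicity
    block = init_block
    blocks = idx.view(-1, block)                              # (num_blocks, block)
    while True:
        reps = k[blocks[:, block // 2]]                       # one representative key per block
        scores = reps @ q                                     # coarse relevance of each block
        top = scores.topk(min(n_keep_blocks, blocks.size(0))).indices
        blocks = blocks[top]                                  # keep only the best-scoring blocks
        if block <= final_block:                              # reached the finest granularity
            break
        block //= 2
        blocks = blocks.reshape(-1, block)                    # split survivors for the next stage
    return blocks.reshape(-1)                                 # sparse token positions

```

In this sketch, each stage only scores the candidate blocks that survived the previous stage, so the selection cost stays far below full attention over the whole context; the returned index set is what the sparse attention (and the cache fetch sketched further below) would operate on.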
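
The abstract also mentions selectively applying RoPE adjustment methods so the model generalizes past its trained window. One of the simplest such adjustments is plain position interpolation, shown below purely as an example of what an "adjustment" looks like; how InfiniteHiP selects among methods based on the model's internal attention patterns is the paper's contribution and is not reproduced here. The function name and the 131,072-token trained length are assumptions.

```python
import torch

def interpolated_positions(seq_len: int, trained_len: int = 131_072) -> torch.Tensor:
    """One example RoPE adjustment (plain position interpolation): rescale
    positions that exceed the pre-training window back into the trained range,
    so rotary angles stay within the distribution the model has seen."""
    pos = torch.arange(seq_len, dtype=torch.float32)
    if seq_len <= trained_len:
        return pos                               # in-range: leave positions untouched
    return pos * (trained_len / seq_len)         # out-of-range: compress uniformly

```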
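
Finally, the KV cache offloading can be pictured as keeping the full cache in host memory and copying only the tokens selected by the pruning step onto the GPU at each decoding step, which is what keeps a 3-million-token cache within a single 48 GB GPU's budget. The snippet below is a hypothetical minimal version of that gather; the tensor names, sizes, and helper function are illustrative and not the framework's actual API.

```python
import torch

# Full cache lives on the host; pinned memory speeds up host-to-device copies.
T, H, D = 1_000_000, 8, 128                                   # tokens, KV heads, head dim
k_host = torch.empty(T, H, D, dtype=torch.float16, pin_memory=True)
v_host = torch.empty(T, H, D, dtype=torch.float16, pin_memory=True)

def fetch_selected(idx: torch.Tensor):
    """Copy only the pruned-in token entries to the GPU for this decoding step."""
    idx_cpu = idx.to("cpu")
    k_gpu = k_host[idx_cpu].to("cuda")        # gather on host, then move just those rows
    v_gpu = v_host[idx_cpu].to("cuda")
    return k_gpu, v_gpu

```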