HeadInfer: Memory-Efficient LLM Inference by Head-wise Offloading

February 18, 2025
Authors: Cheng Luo, Zefan Cai, Hanshi Sun, Jinqi Xiao, Bo Yuan, Wen Xiao, Junjie Hu, Jiawei Zhao, Beidi Chen, Anima Anandkumar
cs.AI

Abstract

Transformer-based large language models (LLMs) demonstrate impressive performance in long context generation. Extending the context length has disproportionately shifted the memory footprint of LLMs during inference to the key-value cache (KV cache). In this paper, we propose HEADINFER, which offloads the KV cache to CPU RAM while avoiding the need to fully store the KV cache for any transformer layer on the GPU. HEADINFER employs a fine-grained, head-wise offloading strategy, maintaining only selected attention heads' KV cache on the GPU while computing attention output dynamically. Through roofline analysis, we demonstrate that HEADINFER maintains computational efficiency while significantly reducing memory footprint. We evaluate HEADINFER on the Llama-3-8B model with a 1-million-token sequence, reducing the GPU memory footprint of the KV cache from 128 GB to 1 GB and the total GPU memory usage from 207 GB to 17 GB, achieving a 92% reduction compared to BF16 baseline inference. Notably, HEADINFER enables 4-million-token inference with an 8B model on a single consumer GPU with 24 GB memory (e.g., NVIDIA RTX 4090) without approximation methods.
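
The 128 GB KV-cache figure follows from Llama-3-8B's configuration (32 layers, 8 grouped-query KV heads of dimension 128) with 1M (2^20) tokens stored in BF16. The sketch below is a minimal illustration under those assumptions, not the authors' implementation: it reproduces that arithmetic and then shows the core head-wise idea, keeping only a few "resident" heads' KV cache on the GPU while the remaining heads live in pinned CPU memory and are streamed back one head at a time when their attention output is computed. The class and parameter names (`HeadwiseKVCache`, `resident_heads`) are hypothetical.

```python
# Illustrative sketch only; names such as HeadwiseKVCache and resident_heads
# are hypothetical and not taken from the HeadInfer codebase.
import torch


def kv_cache_bytes(layers=32, kv_heads=8, head_dim=128, tokens=2**20, dtype_bytes=2):
    """KV-cache size: 2 tensors (K and V) per layer, per KV head, per token.
    With Llama-3-8B's config and 2^20 tokens in BF16 this gives 128 GiB."""
    return 2 * layers * kv_heads * head_dim * tokens * dtype_bytes


print(kv_cache_bytes() / 2**30)  # -> 128.0 (GiB)


class HeadwiseKVCache:
    """Keep the KV cache of a few 'resident' heads on the GPU; offload the rest
    to pinned CPU memory and bring each head back only while its attention
    output is being computed (a single layer is shown for brevity)."""

    def __init__(self, num_heads, head_dim, max_len, resident_heads, device="cuda"):
        self.resident = set(resident_heads)
        self.device = device
        self.k, self.v, self.len = {}, {}, 0
        for h in range(num_heads):
            dev = device if h in self.resident else "cpu"
            pin = dev == "cpu"  # pinned host memory enables fast PCIe transfers
            self.k[h] = torch.empty(max_len, head_dim, device=dev, pin_memory=pin)
            self.v[h] = torch.empty(max_len, head_dim, device=dev, pin_memory=pin)

    def append(self, k_new, v_new):
        """k_new, v_new: [num_heads, head_dim] for the newly generated token."""
        # Offloaded heads are written straight to CPU memory; the real system
        # overlaps these per-head transfers with computation.
        for h in range(k_new.shape[0]):
            self.k[h][self.len].copy_(k_new[h])
            self.v[h][self.len].copy_(v_new[h])
        self.len += 1

    def attend(self, q):
        """q: [num_heads, head_dim] -> per-head attention output [num_heads, head_dim]."""
        outs = []
        for h in range(q.shape[0]):
            # Offloaded heads are copied back head by head; resident heads are no-ops.
            k = self.k[h][: self.len].to(self.device, non_blocking=True)
            v = self.v[h][: self.len].to(self.device, non_blocking=True)
            scores = (k @ q[h]) / (k.shape[-1] ** 0.5)
            outs.append(torch.softmax(scores, dim=-1) @ v)
        return torch.stack(outs)


# Toy usage: two resident heads, six offloaded heads.
if torch.cuda.is_available():
    cache = HeadwiseKVCache(num_heads=8, head_dim=128, max_len=16, resident_heads=[0, 1])
    for _ in range(4):
        k, v = torch.randn(2, 8, 128, device="cuda")
        cache.append(k, v)
    out = cache.attend(torch.randn(8, 128, device="cuda"))  # [8, 128]
```

In the paper's actual design, these head-granular transfers are overlapped with attention computation, which is why the roofline analysis shows efficiency being preserved; the sketch above only illustrates the memory placement.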
