

Cost-Optimal Grouped-Query Attention for Long-Context LLMs

March 12, 2025
Authors: Yingfa Chen, Yutong Wu, Xu Han, Zhiyuan Liu, Maosong Sun
cs.AI

Abstract

Building effective and efficient Transformer-based large language models (LLMs) has recently become a research focus, which requires maximizing models' language capabilities while minimizing training and deployment costs. Existing efforts have primarily characterized the complex relationships among model performance, parameter size, and data size, and have searched for the optimal compute allocation for training LLMs. However, they overlook the impact of context length and attention head configuration (the number of query and key-value heads in grouped-query attention) on training and inference. In this paper, we systematically compare models with different parameter sizes, context lengths, and attention head configurations in terms of model performance, computational cost, and memory cost. We then extend existing scaling methods, which are based solely on parameter size and training compute, to guide the construction of cost-optimal LLMs during both training and inference. Our quantitative scaling studies show that, when processing sufficiently long sequences, a larger model with fewer attention heads can achieve a lower loss while incurring lower computational and memory costs. Our findings provide valuable insights for developing practical LLMs, especially in long-context processing scenarios. We will publicly release our code and data.
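
To make the head-configuration trade-off concrete, below is a minimal sketch of grouped-query attention (GQA) in PyTorch, in which several query heads share each key-value head so the KV cache, and its memory traffic at long context lengths, shrinks by the grouping factor. The head counts, dimensions, and function names are illustrative assumptions, not the configurations or code released with the paper.

```python
# Minimal GQA sketch (illustrative only; not the paper's implementation).
import torch
import torch.nn.functional as F

def grouped_query_attention(x, w_q, w_k, w_v, n_q_heads, n_kv_heads):
    """x: (batch, seq, d_model). Each group of query heads shares one KV head,
    so the KV cache shrinks by a factor of n_q_heads / n_kv_heads."""
    b, t, d = x.shape
    head_dim = d // n_q_heads
    q = (x @ w_q).view(b, t, n_q_heads, head_dim).transpose(1, 2)   # (b, Hq, t, hd)
    k = (x @ w_k).view(b, t, n_kv_heads, head_dim).transpose(1, 2)  # (b, Hkv, t, hd)
    v = (x @ w_v).view(b, t, n_kv_heads, head_dim).transpose(1, 2)
    # Repeat each KV head so every group of query heads attends to its shared KV head.
    group = n_q_heads // n_kv_heads
    k = k.repeat_interleave(group, dim=1)
    v = v.repeat_interleave(group, dim=1)
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
    return out.transpose(1, 2).reshape(b, t, d)

# Example: 16 query heads sharing 4 key-value heads (4x smaller KV cache).
d_model, n_q, n_kv = 1024, 16, 4
head_dim = d_model // n_q
x = torch.randn(2, 128, d_model)
w_q = torch.randn(d_model, n_q * head_dim) * 0.02
w_k = torch.randn(d_model, n_kv * head_dim) * 0.02
w_v = torch.randn(d_model, n_kv * head_dim) * 0.02
print(grouped_query_attention(x, w_q, w_k, w_v, n_q, n_kv).shape)  # (2, 128, 1024)
```

In this example configuration, the per-token KV cache is four times smaller than in standard multi-head attention with 16 KV heads; this is the kind of memory and compute saving that the paper's cost-optimal analysis weighs against the resulting loss at long context lengths.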
