Adaptive Layer-skipping in Pre-trained LLMs
March 31, 2025
Authors: Xuan Luo, Weizhi Wang, Xifeng Yan
cs.AI
Abstract
Various layer-skipping methods have been proposed to accelerate token generation in large language models (LLMs). However, they have overlooked a fundamental question: how do computational demands vary across the generation of different tokens? In this work, we introduce FlexiDepth, a method that dynamically adjusts the number of Transformer layers used in text generation. By incorporating a plug-in router and adapter, FlexiDepth enables adaptive layer-skipping in LLMs without modifying their original parameters. Applying FlexiDepth to the Llama-3-8B model skips 8 of its 32 layers while maintaining 100% of benchmark performance. Experimental results with FlexiDepth demonstrate that computational demands in LLMs vary significantly with token type. Specifically, generating repetitive tokens or fixed phrases requires fewer layers, whereas producing tokens involving computation or high uncertainty requires more. Interestingly, this adaptive allocation pattern aligns with human intuition. To advance research in this area, we open-source FlexiDepth and a dataset documenting its layer allocation patterns for future exploration.
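To make the plug-in router and adapter idea from the abstract concrete, the sketch below shows one plausible way such a wrapper could sit around a frozen pre-trained Transformer layer: a small router scores each token and gates between the full layer and a cheap adapter-based skip path. The class name FlexiDepthBlock, the bottleneck size, the 0.5 threshold mentioned in the comments, and the use of nn.Linear as a stand-in layer are illustrative assumptions, not the authors' released implementation.

```python
# Minimal conceptual sketch of a router + adapter wrapper around a frozen layer.
# All names and hyperparameters here are hypothetical, for illustration only.
import torch
import torch.nn as nn


class FlexiDepthBlock(nn.Module):
    """Wraps a frozen pre-trained layer with a plug-in router and a lightweight
    adapter that decide, per token, whether to run the full layer or skip it."""

    def __init__(self, pretrained_layer: nn.Module, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.layer = pretrained_layer
        for p in self.layer.parameters():  # original parameters stay untouched
            p.requires_grad = False

        # Router: scores each token's need for this layer (0 = skip, 1 = compute).
        self.router = nn.Sequential(
            nn.Linear(hidden_size, bottleneck),
            nn.SiLU(),
            nn.Linear(bottleneck, 1),
            nn.Sigmoid(),
        )
        # Adapter: small bottleneck MLP on the skip path, so skipped tokens
        # still receive a cheap transformation instead of passing through unchanged.
        self.adapter = nn.Sequential(
            nn.Linear(hidden_size, bottleneck),
            nn.SiLU(),
            nn.Linear(bottleneck, hidden_size),
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size)
        gate = self.router(hidden_states)                      # (batch, seq_len, 1)
        full_out = self.layer(hidden_states)                   # full-layer path
        skip_out = hidden_states + self.adapter(hidden_states) # skip path
        # Soft mixture keeps training differentiable; at inference one would
        # hard-threshold the gate (e.g. gate > 0.5) and run the layer only
        # for the selected tokens.
        return gate * full_out + (1.0 - gate) * skip_out


# Usage example with a plain nn.Linear standing in for a real Transformer layer.
if __name__ == "__main__":
    hidden = 16
    block = FlexiDepthBlock(nn.Linear(hidden, hidden), hidden_size=hidden)
    x = torch.randn(2, 5, hidden)
    print(block(x).shape)  # torch.Size([2, 5, 16])
```

The soft gate in this sketch is only a training-time convenience; the per-token, hard skip decision at inference is what yields the adaptive depth behavior the abstract describes.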