Relaxed Recursive Transformers: Effective Parameter Sharing with Layer-wise LoRA
October 28, 2024
Authors: Sangmin Bae, Adam Fisch, Hrayr Harutyunyan, Ziwei Ji, Seungyeon Kim, Tal Schuster
cs.AI
Abstract
Large language models (LLMs) are expensive to deploy. Parameter sharing offers a possible path towards reducing their size and cost, but its effectiveness in modern LLMs remains fairly limited. In this work, we revisit "layer tying" as a form of parameter sharing in Transformers, and introduce novel methods for converting existing LLMs into smaller "Recursive Transformers" that share parameters across layers, with minimal loss of performance. Here, our Recursive Transformers are efficiently initialized from standard pretrained Transformers, but only use a single block of unique layers that is then repeated multiple times in a loop. We further improve performance by introducing Relaxed Recursive Transformers that add flexibility to the layer tying constraint via depth-wise low-rank adaptation (LoRA) modules, yet still preserve the compactness of the overall model. We show that our recursive models (e.g., recursive Gemma 1B) outperform both similar-sized vanilla pretrained models (such as TinyLlama 1.1B and Pythia 1B) and knowledge distillation baselines -- and can even recover most of the performance of the original "full-size" model (e.g., Gemma 2B with no shared parameters). Finally, we propose Continuous Depth-wise Batching, a promising new inference paradigm enabled by the Recursive Transformer when paired with early exiting. In a theoretical analysis, we show that this has the potential to lead to significant (2-3x) gains in inference throughput.
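The abstract's final claim concerns Continuous Depth-wise Batching with early exiting. The toy scheduler below, under simplified assumptions of my own (a linear layer standing in for the shared block and a random exit rule), illustrates why parameter sharing enables it: requests sitting at different recursion depths can be served by one forward pass of the same shared block, and a slot freed by an early exit is refilled on the next step.

```python
# Hedged sketch of continuous depth-wise batching (assumption: toy scheduler,
# not the paper's serving system). Since every recursion depth runs the SAME
# shared block, requests at different depths are packed into one batch.
import torch
from collections import deque

MAX_DEPTH, BATCH_SLOTS, DIM = 3, 4, 256
shared_block = torch.nn.Linear(DIM, DIM)  # stand-in for the shared layer block

waiting = deque(torch.randn(1, DIM) for _ in range(10))  # incoming requests
active = []                                              # (hidden_state, depth)

steps = 0
while waiting or active:
    # Refill free batch slots with new requests entering at depth 0.
    while waiting and len(active) < BATCH_SLOTS:
        active.append((waiting.popleft(), 0))

    # One forward pass of the shared block serves every depth simultaneously.
    batch = torch.cat([h for h, _ in active], dim=0)
    out = shared_block(batch)

    survivors = []
    for row, (_, depth) in zip(out.split(1, dim=0), active):
        depth += 1
        # Toy early-exit rule: leave when "confident" (random here) or at max depth.
        if depth == MAX_DEPTH or torch.rand(()) < 0.3:
            continue  # request finishes; its batch slot is freed
        survivors.append((row, depth))
    active = survivors
    steps += 1

print(f"served 10 requests in {steps} shared-block steps")
```

In a standard (untied) Transformer, requests at different layer depths would need different weights and could not share a single forward pass this way, which is the intuition behind the throughput gains the abstract reports.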