Puzzle: Distillation-Based NAS for Inference-Optimized LLMs
November 28, 2024
作者: Akhiad Bercovich, Tomer Ronen, Talor Abramovich, Nir Ailon, Nave Assaf, Mohammad Dabbah, Ido Galil, Amnon Geifman, Yonatan Geifman, Izhak Golan, Netanel Haber, Ehud Karpas, Itay Levy, Shahar Mor, Zach Moshe, Najeeb Nabwani, Omri Puny, Ran Rubin, Itamar Schen, Ido Shahaf, Oren Tropp, Omer Ullman Argov, Ran Zilberstein, Ran El-Yaniv
cs.AI
Abstract
Large language models (LLMs) have demonstrated remarkable capabilities, but
their adoption is limited by high computational costs during inference. While
increasing parameter counts enhances accuracy, it also widens the gap between
state-of-the-art capabilities and practical deployability. We present Puzzle, a
framework that accelerates LLM inference on specific hardware while preserving
model capabilities. Through an innovative application of neural architecture
search (NAS) at an unprecedented scale, Puzzle systematically optimizes models
with tens of billions of parameters under hardware constraints. Our approach
utilizes blockwise local knowledge distillation (BLD) for parallel architecture
exploration and employs mixed-integer programming for precise constraint
optimization.
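To make these two ingredients concrete, the following sketch poses a toy version of the selection step: after blockwise local distillation has (hypothetically) scored a small library of block variants, a mixed-integer program chooses exactly one variant per layer to maximize total quality under a runtime budget. It uses the open-source PuLP library; the variant names, quality scores, costs, and budget below are invented for illustration and are assumptions, not values or code from the paper.

```python
# Toy mixed-integer program for Puzzle-style block selection.
# Assumptions (not from the paper): per-(layer, variant) quality scores
# stand in for negative blockwise-distillation losses produced by BLD,
# and costs stand in for profiled per-block runtimes on the target GPU.
import pulp

NUM_LAYERS = 4                      # toy size; a 70B parent has many more layers
VARIANTS = ["full", "narrow_ffn", "no_attention"]

# Hypothetical quality (higher is better) and runtime cost per variant.
quality = {(i, v): {"full": 1.00, "narrow_ffn": 0.97, "no_attention": 0.93}[v]
           for i in range(NUM_LAYERS) for v in VARIANTS}
cost = {(i, v): {"full": 1.00, "narrow_ffn": 0.60, "no_attention": 0.45}[v]
        for i in range(NUM_LAYERS) for v in VARIANTS}
BUDGET = 3.0                        # total runtime budget, arbitrary units

prob = pulp.LpProblem("puzzle_block_selection", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", list(quality.keys()), cat="Binary")

# Objective: maximize the summed quality of the assembled child model.
prob += pulp.lpSum(quality[k] * x[k] for k in quality)

# Exactly one variant must be chosen for each layer.
for i in range(NUM_LAYERS):
    prob += pulp.lpSum(x[(i, v)] for v in VARIANTS) == 1

# Hardware constraint: the chosen blocks must fit the runtime budget.
prob += pulp.lpSum(cost[k] * x[k] for k in cost) <= BUDGET

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for i in range(NUM_LAYERS):
    chosen = next(v for v in VARIANTS if pulp.value(x[(i, v)]) > 0.5)
    print(f"layer {i}: {chosen}")
```

In this toy instance the solver trades one layer's cheaper variants against the budget (here it keeps one "full" layer and narrows the rest). The paper's actual search operates at far larger scale, with quality estimates obtained by training candidate blocks in parallel against the parent model's activations.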
We demonstrate the real-world impact of our framework through
Llama-3.1-Nemotron-51B-Instruct (Nemotron-51B), a publicly available model
derived from Llama-3.1-70B-Instruct. Nemotron-51B achieves a 2.17x inference
throughput speedup, fitting on a single NVIDIA H100 GPU while preserving 98.4%
of the original model's capabilities. Nemotron-51B currently stands as the most
accurate language model capable of inference on a single GPU with large batch
sizes. Remarkably, this transformation required just 45B training tokens,
compared to over 15T tokens used for the 70B model it was derived from. This
establishes a new paradigm where powerful models can be optimized for efficient
deployment with only negligible compromise of their capabilities, demonstrating
that inference performance, not parameter count alone, should guide model
selection. With the release of Nemotron-51B and the presentation of the Puzzle
framework, we provide practitioners immediate access to state-of-the-art
language modeling capabilities at significantly reduced computational costs.