Scaling Smart: Accelerating Large Language Model Pre-training with Small Model Initialization

September 19, 2024
Authors: Mohammad Samragh, Iman Mirzadeh, Keivan Alizadeh Vahid, Fartash Faghri, Minsik Cho, Moin Nabi, Devang Naik, Mehrdad Farajtabar
cs.AI

Abstract

The pre-training phase of language models often begins with randomly initialized parameters. With the current trends in scaling models, training their large number of parameters can be extremely slow and costly. In contrast, small language models are less expensive to train, but they often cannot achieve the accuracy of large models. In this paper, we explore an intriguing idea to connect these two different regimes: Can we develop a method to initialize large language models using smaller pre-trained models? Will such initialization bring any benefits in terms of training time and final accuracy? To answer these questions, we introduce HyperCloning, a method that can expand the parameters of a pre-trained language model to those of a larger model with increased hidden dimensions. Our method ensures that the larger model retains the functionality of the smaller model. As a result, the larger model already inherits the predictive power and accuracy of the smaller model before training starts. We demonstrate that training such an initialized model results in significant savings in the GPU hours required for pre-training large language models.
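
To make the idea concrete, below is a minimal PyTorch sketch of a function-preserving expansion for a single linear layer, assuming the larger hidden dimension is an integer multiple of the smaller one. The function name clone_linear, the expansion factor, and the tile-and-rescale scheme are illustrative assumptions based on the abstract's description, not necessarily the exact construction used in HyperCloning.

```python
import torch
import torch.nn as nn

def clone_linear(small: nn.Linear, factor: int = 2) -> nn.Linear:
    """Expand a linear layer so that, on inputs tiled `factor` times,
    it reproduces the small layer's output tiled `factor` times.

    Illustrative sketch of a function-preserving expansion; the paper's
    exact weight-cloning scheme may differ in its details.
    """
    d_out, d_in = small.weight.shape
    big = nn.Linear(d_in * factor, d_out * factor, bias=small.bias is not None)
    with torch.no_grad():
        # Tile the small weight matrix in a (factor x factor) block grid and
        # divide by `factor`, so each expanded output block sums back to W @ x.
        big.weight.copy_(small.weight.repeat(factor, factor) / factor)
        if small.bias is not None:
            # Biases are simply replicated across the expanded output units.
            big.bias.copy_(small.bias.repeat(factor))
    return big

# Sanity check: the expanded layer matches the small layer on tiled inputs.
small = nn.Linear(8, 8)
big = clone_linear(small, factor=2)
x = torch.randn(4, 8)
assert torch.allclose(big(x.repeat(1, 2)), small(x).repeat(1, 2), atol=1e-6)
```

Because every layer expanded this way computes the same function as its smaller counterpart on replicated activations, a network built from such layers starts training with the small model's predictions already in place, and the additional capacity is refined during pre-training rather than learned from random initialization.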
