Slamming: Training a Speech Language Model on One GPU in a Day
February 19, 2025
Authors: Gallil Maimon, Avishai Elmakies, Yossi Adi
cs.AI
Abstract
We introduce Slam, a recipe for training high-quality Speech Language Models
(SLMs) on a single academic GPU in 24 hours. We do so through empirical
analysis of model initialisation and architecture, synthetic training data,
preference optimisation with synthetic data, and tweaking of all other
components. We empirically demonstrate that this training recipe also scales
well with more compute, achieving results on par with leading SLMs at a
fraction of the compute cost. We hope these insights will make SLM training
and research more accessible. In the context of SLM scaling laws, our results
far outperform the predicted compute-optimal performance, giving an optimistic
view of SLM feasibility. See code, data, models, and samples at
https://pages.cs.huji.ac.il/adiyoss-lab/slamming.
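One ingredient the abstract names is preference optimisation with synthetic data. The authors' actual recipe is in the linked repository; purely as a hypothetical illustration, the sketch below shows what a standard DPO-style preference-optimisation loss over (chosen, rejected) pairs looks like in PyTorch. The function and argument names, and the choice of DPO specifically, are assumptions for illustration, not taken from the Slam codebase.

```python
# Hypothetical sketch of a DPO-style preference-optimisation loss.
# Not from the Slam repository; see the link above for the authors' recipe.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Direct Preference Optimization over a batch of (chosen, rejected)
    pairs, e.g. synthetic speech continuations ranked by a quality proxy.
    Inputs are summed token log-probabilities for each full response."""
    # Implicit reward: log-ratio of the policy vs. a frozen reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximise the margin between chosen and rejected responses.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

Here the "chosen" and "rejected" responses would both be model-generated (synthetic), with the preference label supplied by an automatic ranking signal rather than human annotation.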