TÜLU 3: Pushing Frontiers in Open Language Model Post-Training

November 22, 2024
Authors: Nathan Lambert, Jacob Morrison, Valentina Pyatkin, Shengyi Huang, Hamish Ivison, Faeze Brahman, Lester James V. Miranda, Alisa Liu, Nouha Dziri, Shane Lyu, Yuling Gu, Saumya Malik, Victoria Graf, Jena D. Hwang, Jiangjiang Yang, Ronan Le Bras, Oyvind Tafjord, Chris Wilhelm, Luca Soldaini, Noah A. Smith, Yizhong Wang, Pradeep Dasigi, Hannaneh Hajishirzi
cs.AI

Abstract

Language model post-training is applied to refine behaviors and unlock new skills across a wide range of recent language models, but open recipes for applying these techniques lag behind proprietary ones. The underlying training data and recipes for post-training are simultaneously the most important pieces of the puzzle and the portion with the least transparency. To bridge this gap, we introduce TÜLU 3, a family of fully-open state-of-the-art post-trained models, alongside its data, code, and training recipes, serving as a comprehensive guide for modern post-training techniques. TÜLU 3, which builds on Llama 3.1 base models, achieves results surpassing the instruct versions of Llama 3.1, Qwen 2.5, Mistral, and even closed models such as GPT-4o-mini and Claude 3.5-Haiku. The training algorithms for our models include supervised finetuning (SFT), Direct Preference Optimization (DPO), and a novel method we call Reinforcement Learning with Verifiable Rewards (RLVR). With TÜLU 3, we introduce a multi-task evaluation scheme for post-training recipes with development and unseen evaluations, standard benchmark implementations, and substantial decontamination of existing open datasets on said benchmarks. We conclude with analysis and discussion of training methods that did not reliably improve performance. In addition to the TÜLU 3 model weights and demo, we release the complete recipe -- including datasets for diverse core skills, a robust toolkit for data curation and evaluation, the training code and infrastructure, and, most importantly, a detailed report for reproducing and further adapting the TÜLU 3 approach to more domains.
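
The core idea behind RLVR, as described in the abstract, is to replace a learned reward model with a programmatic check that the model's output is verifiably correct. The sketch below is a minimal, illustrative example of such a verifiable reward for math-style answers; the function names, the final-answer extraction heuristic, and the exact-match criterion are assumptions for illustration and are not taken from the TÜLU 3 codebase.

```python
# Illustrative sketch of a "verifiable reward": the policy earns a binary reward
# only when its output can be checked against a ground-truth answer.
# Names and the extraction heuristic are hypothetical, not from TÜLU 3.
import re


def extract_final_answer(completion: str) -> str | None:
    """Pull the last number from a completion (a common GSM8K-style heuristic)."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion.replace(",", ""))
    return numbers[-1] if numbers else None


def verifiable_reward(completion: str, gold_answer: str) -> float:
    """Return 1.0 iff the completion's final answer matches the reference, else 0.0."""
    predicted = extract_final_answer(completion)
    return 1.0 if predicted is not None and predicted == gold_answer.strip() else 0.0


# This scalar reward would then be fed to an RL algorithm (e.g., PPO)
# in place of a score from a learned reward model.
print(verifiable_reward("...so the total is 42.", "42"))  # 1.0
```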
