
RATIONALYST: Pre-training Process-Supervision for Improving Reasoning

October 1, 2024
Authors: Dongwei Jiang, Guoxuan Wang, Yining Lu, Andrew Wang, Jingyu Zhang, Chuyu Liu, Benjamin Van Durme, Daniel Khashabi
cs.AI

Abstract

The reasoning steps generated by LLMs might be incomplete, as they mimic logical leaps common in everyday communication found in their pre-training data: underlying rationales are frequently left implicit (unstated). To address this challenge, we introduce RATIONALYST, a model for process-supervision of reasoning based on pre-training on a vast collection of rationale annotations extracted from unlabeled data. We extract 79k rationales from a web-scale unlabeled dataset (the Pile) and a combination of reasoning datasets with minimal human intervention. This web-scale pre-training for reasoning allows RATIONALYST to consistently generalize across diverse reasoning tasks, including mathematical, commonsense, scientific, and logical reasoning. Fine-tuned from LLaMa-3-8B, RATIONALYST improves the accuracy of reasoning by an average of 3.9% on 7 representative reasoning benchmarks. It also demonstrates superior performance compared to significantly larger verifiers like GPT-4 and similarly sized models fine-tuned on matching training sets.
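
The abstract describes RATIONALYST's role at inference time: it supplies process supervision by estimating how plausible each intermediate reasoning step is in light of the implicit rationales it learned during pre-training. The sketch below illustrates one way such a supervisor could be plugged into step-by-step decoding. It is a minimal illustration, not the authors' implementation: the checkpoint name, the helper functions, and the raw log-probability scoring heuristic (the paper scores steps via predicted implicit rationales) are all assumptions.

```python
# Minimal sketch (not the authors' released code): ranking candidate next
# reasoning steps with a rationale-trained supervisor model. The checkpoint
# name and the scoring heuristic are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "meta-llama/Meta-Llama-3-8B"  # base model per the paper; a fine-tuned
tokenizer = AutoTokenizer.from_pretrained(BASE)        # RATIONALYST checkpoint
supervisor = AutoModelForCausalLM.from_pretrained(BASE)  # would replace this

def step_log_prob(context: str, step: str) -> float:
    """Log-probability the supervisor assigns to `step`, given `context`."""
    ctx_len = tokenizer(context, return_tensors="pt").input_ids.shape[1]
    ids = tokenizer(context + step, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = supervisor(ids).logits
    # Logits at position i predict token i+1, so the step tokens
    # ids[0, ctx_len:] are scored by logits at positions ctx_len-1 .. -2.
    log_probs = torch.log_softmax(logits[0, ctx_len - 1 : -1], dim=-1)
    step_ids = ids[0, ctx_len:]
    return log_probs.gather(1, step_ids.unsqueeze(1)).sum().item()

def pick_next_step(context: str, candidates: list[str]) -> str:
    """Process supervision: keep the candidate reasoning step the
    supervisor finds most plausible as a continuation."""
    return max(candidates, key=lambda s: step_log_prob(context, s))
```

At each step, a generator model would propose several candidate continuations and `pick_next_step` would keep the one the supervisor rates highest; this is the general shape of the process supervision the abstract describes, with the paper's rationale-based scoring swapped in for the simple likelihood used here.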
