Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate
January 29, 2025
Authors: Yubo Wang, Xiang Yue, Wenhu Chen
cs.AI
Abstract
Supervised Fine-Tuning (SFT) is commonly used to train language models to
imitate annotated responses for given instructions. In this paper, we challenge
this paradigm and propose Critique Fine-Tuning (CFT), a strategy where models
learn to critique noisy responses rather than simply imitate correct ones.
Inspired by human learning processes that emphasize critical thinking, CFT
encourages deeper analysis and nuanced understanding, traits often overlooked by
standard SFT. To validate the effectiveness of CFT, we construct a 50K-sample
dataset from WebInstruct, using GPT-4o as the teacher to generate critiques in
the form of (input=[query; noisy response], output=critique). CFT on this
dataset yields a consistent 4-10% improvement over SFT on six math benchmarks
with different base models like Qwen2.5, Qwen2.5-Math and DeepSeek-Math. We
further expand to MetaMath and NuminaMath datasets and observe similar gains
over SFT. Notably, our Qwen2.5-Math-CFT model, trained on just 50K
samples, matches or outperforms competitive models such as AceMath and
Qwen2.5-Math-Instruct on most benchmarks, both of which use over 2M samples.
Ablation studies show that CFT is robust to the source of the noisy responses
and to the choice of teacher critique model. Through these findings, we argue that critique-based
training offers a more effective alternative to advance the reasoning of
language models.
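The (input=[query; noisy response], output=critique) format described in the abstract can be sketched as below. The prompt wording, field names, and the `build_cft_example` helper are illustrative assumptions, not the authors' actual template; the key idea is that the query and noisy response form the model input, while only the critique serves as the supervision target.

```python
def build_cft_example(query: str, noisy_response: str, critique: str) -> dict:
    """Assemble one CFT training pair: the model reads the query plus a
    (possibly wrong) candidate solution, and is trained to emit the critique.
    Note: this prompt template is a hypothetical sketch, not the paper's."""
    prompt = (
        "Question:\n" + query.strip() + "\n\n"
        "Candidate solution:\n" + noisy_response.strip() + "\n\n"
        "Critique the solution above: point out any errors and state "
        "whether the final answer is correct."
    )
    # Only the "output" field would receive loss during fine-tuning.
    return {"input": prompt, "output": critique.strip()}

example = build_cft_example(
    query="What is 17 * 24?",
    noisy_response="17 * 24 = 17 * 20 + 17 * 4 = 340 + 64 = 404.",
    critique=(
        "The decomposition into 17*20 + 17*4 is correct, but 17 * 4 = 68, "
        "not 64, so the total is 340 + 68 = 408. The final answer is wrong."
    ),
)
```

In contrast, an SFT pair would place a clean reference solution in the output field; CFT instead supervises on the analysis of a noisy one, which is what the abstract credits for the 4-10% gains.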