A Minimalist Approach to LLM Reasoning: from Rejection Sampling to Reinforce
April 15, 2025
Authors: Wei Xiong, Jiarui Yao, Yuhui Xu, Bo Pang, Lei Wang, Doyen Sahoo, Junnan Li, Nan Jiang, Tong Zhang, Caiming Xiong, Hanze Dong
cs.AI
Abstract
Reinforcement learning (RL) has become a prevailing approach for fine-tuning
large language models (LLMs) on complex reasoning tasks. Among recent methods,
GRPO stands out for its empirical success in training models such as
DeepSeek-R1, yet the sources of its effectiveness remain poorly understood. In
this work, we revisit GRPO from a reinforce-like algorithm perspective and
analyze its core components. Surprisingly, we find that a simple rejection
sampling baseline, RAFT, which trains only on positively rewarded samples,
yields performance competitive with GRPO and PPO. Our ablation studies reveal
that GRPO's main advantage arises from discarding prompts with entirely
incorrect responses, rather than from its reward normalization. Motivated by
this insight, we propose Reinforce-Rej, a minimal extension of policy gradient
that filters both entirely incorrect and entirely correct samples.
Reinforce-Rej improves KL efficiency and stability, serving as a lightweight
yet effective alternative to more complex RL algorithms. We advocate RAFT as a
robust and interpretable baseline, and suggest that future advances should
focus on more principled designs for incorporating negative samples, rather
than relying on them indiscriminately. Our findings provide guidance for future
work in reward-based LLM post-training.
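The abstract contrasts two sample-selection rules: RAFT keeps only positively rewarded responses, while Reinforce-Rej discards prompts whose sampled responses are entirely incorrect or entirely correct. Below is a minimal Python sketch of these rules as described in the abstract; the function names (`raft_filter`, `reinforce_rej_filter`) and the binary-reward representation are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code) of the two sample-selection rules
# described in the abstract, using hypothetical helper names.

from typing import List, Tuple

# A "group" is the set of responses sampled for one prompt, each paired with
# a binary correctness reward (1.0 = correct, 0.0 = incorrect).
Group = List[Tuple[str, float]]


def raft_filter(group: Group) -> Group:
    """RAFT-style selection: keep only positively rewarded responses."""
    return [(resp, r) for resp, r in group if r > 0]


def reinforce_rej_filter(group: Group) -> Group:
    """Reinforce-Rej-style selection: drop the whole prompt if its sampled
    responses are entirely incorrect or entirely correct; otherwise keep the
    full group (positive and negative) for the policy-gradient update."""
    rewards = [r for _, r in group]
    if all(r > 0 for r in rewards) or all(r <= 0 for r in rewards):
        return []  # prompt carries no mixed signal under this rule
    return group


if __name__ == "__main__":
    mixed = [("resp A", 1.0), ("resp B", 0.0), ("resp C", 1.0)]
    all_wrong = [("resp D", 0.0), ("resp E", 0.0)]

    print(raft_filter(mixed))               # keeps resp A and resp C
    print(raft_filter(all_wrong))           # keeps nothing
    print(reinforce_rej_filter(mixed))      # keeps the full group
    print(reinforce_rej_filter(all_wrong))  # discards the prompt entirely
```

Both rules act before the gradient step: under this reading, RAFT updates only on correct responses, while Reinforce-Rej retains negative samples only for prompts that also yield at least one correct response.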