Offline Reinforcement Learning for LLM Multi-Step Reasoning
December 20, 2024
Authors: Huaijie Wang, Shibo Hao, Hanze Dong, Shenao Zhang, Yilin Bao, Ziran Yang, Yi Wu
cs.AI
Abstract
Improving the multi-step reasoning ability of large language models (LLMs)
with offline reinforcement learning (RL) is essential for quickly adapting them
to complex tasks. While Direct Preference Optimization (DPO) has shown promise
in aligning LLMs with human preferences, it is less suitable for multi-step
reasoning tasks because (1) DPO relies on paired preference data, which is not
readily available for multi-step reasoning tasks, and (2) it treats all tokens
uniformly, making it ineffective for credit assignment in multi-step reasoning
tasks, which often come with sparse reward. In this work, we propose OREO
(Offline Reasoning Optimization), an offline RL method for enhancing LLM
multi-step reasoning. Building on insights from previous work on maximum
entropy reinforcement learning, OREO jointly learns a policy model and a value
function by optimizing the soft Bellman equation. We show in principle that it
reduces the need to collect pairwise data and enables better credit assignment.
Empirically, OREO surpasses existing offline learning methods on multi-step
reasoning benchmarks, including mathematical reasoning tasks (GSM8K, MATH) and
embodied agent control (ALFWorld). The approach can be extended to a
multi-iteration framework when additional resources are available. Furthermore,
the learned value function can be leveraged to guide tree search for free,
which can further boost performance at test time.
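To make the soft Bellman equation referenced above concrete, the sketch below shows a standard KL-regularized (maximum-entropy-style) consistency condition between an optimal policy and value function, together with a squared-residual loss that trains a policy \(\pi_\theta\) and value \(V_\phi\) jointly. This is an illustrative assumption drawn from common formulations in this line of work, not necessarily the exact objective defined in the paper.

\[
\beta \log \frac{\pi^{*}(a_t \mid s_t)}{\pi_{\mathrm{ref}}(a_t \mid s_t)}
\;=\; r(s_t, a_t) \;+\; V^{*}(s_{t+1}) \;-\; V^{*}(s_t),
\]
\[
\mathcal{L}(\theta, \phi)
\;=\; \mathbb{E}_{(s_t, a_t, s_{t+1}) \sim \mathcal{D}}
\Big[\big(V_\phi(s_t) - V_\phi(s_{t+1}) - r(s_t, a_t)
+ \beta \log \tfrac{\pi_\theta(a_t \mid s_t)}{\pi_{\mathrm{ref}}(a_t \mid s_t)}\big)^{2}\Big],
\]

where \(s_t\) is the current reasoning state (prompt plus partial solution), \(a_t\) the next action (e.g., a reasoning step), \(r\) the often sparse task reward, \(\pi_{\mathrm{ref}}\) a fixed reference (e.g., supervised fine-tuned) policy, \(\beta\) the KL-regularization coefficient, and \(\mathcal{D}\) the offline dataset. Minimizing a per-step residual of this kind is what allows credit to be assigned to individual steps rather than treating all tokens uniformly, in contrast to DPO.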