Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search

February 4, 2025
作者: Maohao Shen, Guangtao Zeng, Zhenting Qi, Zhang-Wei Hong, Zhenfang Chen, Wei Lu, Gregory Wornell, Subhro Das, David Cox, Chuang Gan
cs.AI

Abstract

Large language models (LLMs) have demonstrated remarkable reasoning capabilities across diverse domains. Recent studies have shown that increasing test-time computation enhances LLMs' reasoning. This typically involves extensive sampling at inference time guided by an external LLM verifier, resulting in a two-player system. Despite its reliance on external guidance, the effectiveness of this system demonstrates the potential of a single LLM to tackle complex tasks. Thus, we pose a new research problem: Can we internalize the search capability to fundamentally enhance the reasoning abilities of a single LLM? This work explores an orthogonal direction, focusing on post-training LLMs for autoregressive search (i.e., an extended reasoning process with self-reflection and self-exploration of new strategies). To achieve this, we propose Chain-of-Action-Thought (COAT) reasoning and a two-stage training paradigm: 1) a small-scale format tuning stage to internalize the COAT reasoning format, and 2) a large-scale self-improvement stage leveraging reinforcement learning. Our approach results in Satori, a 7B LLM trained on open-source models and data. Extensive empirical evaluations demonstrate that Satori achieves state-of-the-art performance on mathematical reasoning benchmarks while exhibiting strong generalization to out-of-domain tasks. Code, data, and models will be fully open-sourced.
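Concretely, COAT reasoning augments an ordinary chain of thought with meta-actions that let the model decide, at each step, whether to continue the current line of thought, self-reflect, or explore an alternative strategy, while the stage-2 RL signal rewards traces that reach the correct final answer. The sketch below illustrates that shape; the `<|continue|>` / `<|reflect|>` / `<|explore|>` token names and the toy outcome reward are assumptions for illustration, not the released Satori format or training code.

```python
# A minimal, hypothetical sketch of a Chain-of-Action-Thought (COAT) trace
# and an outcome-based reward of the kind used in RL self-improvement.
# Meta-action token names and the reward are illustrative assumptions.

CONTINUE, REFLECT, EXPLORE = "<|continue|>", "<|reflect|>", "<|explore|>"

# A COAT trace interleaves meta-actions with ordinary reasoning text:
# keep going, pause to self-check, or switch to an alternative strategy.
coat_trace = (
    f"{CONTINUE} 12 * 8 = 96, so the subtotal is 96. "
    f"{REFLECT} Wait: the question asks for the subtotal minus 6. "
    f"{EXPLORE} Recompute: 96 - 6 = 90. Final answer: 90"
)

def reward(trace: str, gold: str) -> float:
    """Outcome reward for the self-improvement stage:
    1.0 iff the trace terminates in the gold answer."""
    return 1.0 if trace.strip().endswith(gold) else 0.0

print(reward(coat_trace, "90"))  # -> 1.0
```

Under this framing, stage 1 (format tuning) teaches the model to emit traces of this shape from a small set of demonstrations, and stage 2 scales up by sampling the model's own traces and reinforcing those that score well under the outcome reward.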
