

LLMs are Greedy Agents: Effects of RL Fine-tuning on Decision-Making Abilities

April 22, 2025
Authors: Thomas Schmied, Jörg Bornschein, Jordi Grau-Moya, Markus Wulfmeier, Razvan Pascanu
cs.AI

Abstract

The success of Large Language Models (LLMs) has sparked interest in various agentic applications. A key hypothesis is that LLMs, leveraging common sense and Chain-of-Thought (CoT) reasoning, can effectively explore and efficiently solve complex domains. However, LLM agents have been found to suffer from sub-optimal exploration and the knowing-doing gap, the inability to effectively act on knowledge present in the model. In this work, we systematically study why LLMs perform sub-optimally in decision-making scenarios. In particular, we closely examine three prevalent failure modes: greediness, frequency bias, and the knowing-doing gap. We propose mitigating these shortcomings by fine-tuning via Reinforcement Learning (RL) on self-generated CoT rationales. Our experiments across multi-armed bandits, contextual bandits, and Tic-tac-toe demonstrate that RL fine-tuning enhances the decision-making abilities of LLMs by increasing exploration and narrowing the knowing-doing gap. Finally, we study both classic exploration mechanisms, such as epsilon-greedy, and LLM-specific approaches, such as self-correction and self-consistency, to enable more effective fine-tuning of LLMs for decision-making.
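
For readers unfamiliar with the classic exploration baseline mentioned above, the following is a minimal, illustrative sketch of epsilon-greedy action selection on a stationary multi-armed bandit. The arm reward probabilities, epsilon value, and function names here are assumptions chosen for illustration only; they do not reflect the paper's actual experimental setup or the LLM-based agents it studies.

```python
import random

def run_bandit(true_means, epsilon=0.1, steps=1000, seed=0):
    """Epsilon-greedy on a Bernoulli multi-armed bandit (illustrative sketch)."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms        # number of pulls per arm
    estimates = [0.0] * n_arms   # running mean reward per arm
    total_reward = 0.0

    for _ in range(steps):
        if rng.random() < epsilon:
            # Explore: pick a random arm with probability epsilon.
            arm = rng.randrange(n_arms)
        else:
            # Exploit: pick the arm with the highest estimated value.
            arm = max(range(n_arms), key=lambda a: estimates[a])

        # Bernoulli reward drawn from the arm's (hidden) success probability.
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        # Incremental update of the running mean estimate.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward

    return estimates, total_reward

if __name__ == "__main__":
    estimates, total = run_bandit([0.2, 0.5, 0.8], epsilon=0.1)
    print("estimated arm values:", [round(v, 2) for v in estimates])
    print("total reward:", total)
```

A purely greedy agent corresponds to epsilon = 0: it locks onto whichever arm looks best early on, which is the kind of under-exploration the abstract identifies in LLM agents and that epsilon-greedy (or, in the paper, RL fine-tuning on self-generated CoT rationales) is meant to counteract.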

