BoostStep: Boosting mathematical capability of Large Language Models via improved single-step reasoning

January 6, 2025
Authors: Beichen Zhang, Yuhong Liu, Xiaoyi Dong, Yuhang Zang, Pan Zhang, Haodong Duan, Yuhang Cao, Dahua Lin, Jiaqi Wang
cs.AI

Abstract

Cutting-edge large language models (LLMs) demonstrate promising performance in solving complex math problems with a divide-and-conquer pipeline and the assistance of in-context learning (ICL) examples. However, their potential for improvement is limited by two critical problems within their ICL examples: granularity mismatch and the ensuing negative-effect noise problem. Specifically, LLMs are capable of the dividing process yet mostly fail because of inaccurate reasoning within a few conquer steps, while ICL examples retrieved at question granularity sometimes lack the steps relevant to a specific challenging reasoning step. Further, this disconnect may hinder correct reasoning due to its irrelevance. To this end, we focus on improving the reasoning quality within each step and present BoostStep. BoostStep aligns the granularity between retrieval and reasoning at the step level, and provides highly related ICL examples for each reasoning step with a novel "first-try" strategy. BoostStep provides more relevant examples than the coarse question-grained strategy, steadily enhancing the model's reasoning quality within each step. BoostStep is a general and robust reasoning-enhancing method that not only improves standalone reasoning performance but also integrates seamlessly with Monte Carlo Tree Search (MCTS) to refine both candidate generation and decision-making. Quantitatively, it improves GPT-4o and Qwen2.5-Math-72B by 3.6% and 2.0% respectively on various mathematical benchmarks, and yields a 7.5% gain when combined with MCTS.
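
As a concrete illustration of the step-grained pipeline the abstract describes, the sketch below outlines one plausible reading of the "first-try" strategy: the model drafts the next step on its own, the draft is used to retrieve the most similar example step from a step-level example bank, and the step is then regenerated with that example in context. This is a minimal sketch under stated assumptions, not the authors' released code; `llm`, `embed`, and `step_bank` are hypothetical stand-ins for a text-completion model, a sentence embedder, and a pre-built bank of example steps with precomputed embeddings.

```python
# Hedged sketch of BoostStep-style step-grained ICL with a "first-try" draft.
# Assumptions (not from the paper's code): llm(prompt) -> str, embed(text) -> np.ndarray,
# step_bank = list of {"text": str, "embedding": np.ndarray}.
import numpy as np


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def retrieve_similar_step(draft_step: str, step_bank: list, embed) -> dict:
    """Return the example step from the bank most similar to the draft step."""
    q = embed(draft_step)
    return max(step_bank, key=lambda ex: cosine(q, ex["embedding"]))


def boost_step(problem: str, steps_so_far: list, llm, embed, step_bank) -> str:
    """One reasoning step with the 'first-try' strategy:
    1. draft the next step without guidance (the first try),
    2. retrieve the most similar example step at step granularity,
    3. regenerate the step with that example in context."""
    context = problem + "\n" + "\n".join(steps_so_far)
    draft = llm(context + "\nNext step:")                     # first try
    example = retrieve_similar_step(draft, step_bank, embed)  # step-grained retrieval
    guided = llm(                                             # guided regeneration
        f"Here is a relevant example step:\n{example['text']}\n\n"
        f"{context}\nNext step:"
    )
    return guided


def solve(problem: str, llm, embed, step_bank, max_steps: int = 20) -> list:
    """Iterate boost_step until the model emits a final answer (simplified stop rule)."""
    steps = []
    for _ in range(max_steps):
        step = boost_step(problem, steps, llm, embed, step_bank)
        steps.append(step)
        if "final answer" in step.lower():
            break
    return steps
```

The key design point this sketch tries to capture is that retrieval is keyed on the model's own draft of the current step rather than on the whole question, which is what aligns the retrieval granularity with the reasoning granularity; the same per-step call could also serve as the candidate generator inside an MCTS loop.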
