BoostStep: Boosting mathematical capability of Large Language Models via improved single-step reasoning
January 6, 2025
Authors: Beichen Zhang, Yuhong Liu, Xiaoyi Dong, Yuhang Zang, Pan Zhang, Haodong Duan, Yuhang Cao, Dahua Lin, Jiaqi Wang
cs.AI
Abstract
Cutting-edge large language models (LLMs) demonstrate promising performance in solving complex math problems with a divide-and-conquer pipeline and the assistance of in-context learning (ICL) examples. However, their potential for improvement is limited by two critical problems within their ICL examples: granularity mismatch and the ensuing negative-effect noise problem. Specifically, LLMs are capable of the dividing process yet mostly fail through inaccurate reasoning within a few conquer steps, while ICL examples retrieved at question granularity sometimes lack the steps relevant to a specific challenging reasoning step. Further, this disconnect may hinder correct reasoning because of its irrelevance. To this end, we focus on improving the reasoning quality within each step and present BoostStep. BoostStep aligns the granularity between retrieval and reasoning at the step level, and provides highly relevant ICL examples for each reasoning step with a novel "first-try" strategy. BoostStep provides more relevant examples than the coarse question-grained strategy, steadily enhancing the model's reasoning quality within each step. BoostStep is a general and robust reasoning-enhancing method that not only improves standalone reasoning performance but also integrates seamlessly with Monte Carlo Tree Search (MCTS) to refine both candidate generation and decision-making. Quantitatively, it improves GPT-4o and Qwen2.5-Math-72B by 3.6% and 2.0%, respectively, on various mathematical benchmarks, and yields a 7.5% gain when combined with MCTS.
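To make the step-grained "first-try" strategy concrete, the sketch below gives one plausible reading of the pipeline described in the abstract: the model drafts a candidate next step, the draft is used to retrieve the most similar worked step from a step-level example bank, and the step is then regenerated with that example in context. The helpers llm_complete, similarity, and step_bank are illustrative placeholders, not the authors' implementation.

    from difflib import SequenceMatcher

    def similarity(a: str, b: str) -> float:
        # Placeholder lexical similarity; the paper's retriever is not specified here.
        return SequenceMatcher(None, a, b).ratio()

    def llm_complete(prompt: str) -> str:
        # Hypothetical LLM wrapper; replace with a real model call (e.g., GPT-4o).
        return "draft of the next reasoning step"

    # Example bank kept at step granularity: each entry is one worked reasoning step.
    step_bank = [
        {"context": "Solve x^2 - 5x + 6 = 0.",
         "step": "Factor the quadratic: (x - 2)(x - 3) = 0, so x = 2 or x = 3."},
        # ... more step-level examples
    ]

    def boost_step(problem: str, solved_steps: list[str]) -> str:
        history = "\n".join(solved_steps)
        # 1. First try: draft the next step without any in-context example.
        draft = llm_complete(f"Problem: {problem}\nSteps so far:\n{history}\nNext step:")
        # 2. Retrieve the example step most similar to the draft (step-grained retrieval).
        best = max(step_bank, key=lambda e: similarity(draft, e["step"]))
        # 3. Regenerate the step with the retrieved example in context.
        prompt = (f"Example problem: {best['context']}\nExample step: {best['step']}\n\n"
                  f"Problem: {problem}\nSteps so far:\n{history}\nNext step:")
        return llm_complete(prompt)

Under this reading, the same per-step routine can also serve as the candidate generator inside an MCTS loop, which is how the abstract describes the integration with tree search.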