GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models
October 7, 2024
Authors: Iman Mirzadeh, Keivan Alizadeh, Hooman Shahrokhi, Oncel Tuzel, Samy Bengio, Mehrdad Farajtabar
cs.AI
Abstract
Recent advancements in Large Language Models (LLMs) have sparked interest in
their formal reasoning capabilities, particularly in mathematics. The GSM8K
benchmark is widely used to assess the mathematical reasoning of models on
grade-school-level questions. While the performance of LLMs on GSM8K has
significantly improved in recent years, it remains unclear whether their
mathematical reasoning capabilities have genuinely advanced, raising questions
about the reliability of the reported metrics. To address these concerns, we
conduct a large-scale study on several SOTA open and closed models. To overcome
the limitations of existing evaluations, we introduce GSM-Symbolic, an improved
benchmark created from symbolic templates that allow for the generation of a
diverse set of questions. GSM-Symbolic enables more controllable evaluations,
providing key insights and more reliable metrics for measuring the reasoning
capabilities of models. Our findings reveal that LLMs exhibit noticeable
variance when responding to different instantiations of the same question.
Specifically, the performance of all models declines when only the numerical
values in the question are altered in the GSM-Symbolic benchmark. Furthermore,
we investigate the fragility of mathematical reasoning in these models and show
that their performance significantly deteriorates as the number of clauses in a
question increases. We hypothesize that this decline is because current LLMs
cannot perform genuine logical reasoning; they replicate reasoning steps from
their training data. Adding a single clause that seems relevant to the question
causes significant performance drops (up to 65%) across all state-of-the-art
models, even though the clause doesn't contribute to the reasoning chain needed
for the final answer. Overall, our work offers a more nuanced understanding of
LLMs' capabilities and limitations in mathematical reasoning.
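To make the template mechanism concrete, below is a minimal, hypothetical Python sketch of how a GSM-Symbolic-style question might be instantiated by resampling names and numerical values, and how an inconsequential "no-op" clause might be spliced in. The template text, name pool, and the instantiate helper are illustrative assumptions, not the paper's actual templates or code.

```python
import random

# Hypothetical GSM-Symbolic-style template. The placeholders {name},
# {x}, and {y} are resampled to produce many instances of the "same"
# question; this template and its variables are illustrative only.
TEMPLATE = (
    "{name} picks {x} apples on Monday and {y} apples on Tuesday. "
    "How many apples does {name} have in total?"
)

# An inconsequential clause in the spirit of the paper's no-op variant:
# it sounds relevant but does not change the answer.
NOOP_CLAUSE = "Five of the apples were slightly smaller than average. "

def instantiate(template: str, add_noop: bool = False, seed: int = 0):
    """Return one (question, ground-truth answer) pair."""
    rng = random.Random(seed)
    name = rng.choice(["Sophie", "Liam", "Mia"])
    x, y = rng.randint(2, 20), rng.randint(2, 20)
    question = template.format(name=name, x=x, y=y)
    if add_noop:
        # Splice the distractor in before the final question sentence.
        head, sep, tail = question.rpartition("How many")
        question = head + NOOP_CLAUSE + sep + tail
    return question, x + y

if __name__ == "__main__":
    # Three instantiations; the last one includes the no-op clause.
    for seed in range(3):
        q, a = instantiate(TEMPLATE, add_noop=(seed == 2), seed=seed)
        print(f"Q: {q}\nA: {a}\n")
```

Evaluating a model across many such draws of the same template, with and without the no-op clause, is what makes it possible to measure per-question performance variance rather than reporting a single accuracy number.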