Not All LLM Reasoners Are Created Equal

October 2, 2024
Authors: Arian Hosseini, Alessandro Sordoni, Daniel Toyama, Aaron Courville, Rishabh Agarwal
cs.AI

Abstract

We study the depth of grade-school math (GSM) problem-solving capabilities of LLMs. To this end, we evaluate their performance on pairs of existing math word problems chained together so that the answer to the second problem depends on correctly answering the first. Our findings reveal a significant reasoning gap in most LLMs, that is, a performance difference between solving the compositional pairs and solving each question independently. This gap is more pronounced in smaller, more cost-efficient, and math-specialized models. Moreover, instruction-tuning recipes and code generation have varying effects across LLM sizes, while finetuning on GSM can lead to task overfitting. Our analysis indicates that large reasoning gaps are not due to test-set leakage, but to distraction from additional context and poor second-hop reasoning. Overall, LLMs exhibit systematic differences in their reasoning abilities, despite what their performance on standard benchmarks suggests.
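
To make the evaluation setup concrete, the sketch below shows one way such compositional pairs and the resulting reasoning gap could be computed. It is a minimal illustration, not the authors' released code: the `Problem` fields, the chaining template in `compose_pair`, the `solve` callable, and the simple accuracy difference in `reasoning_gap` are all assumptions; the paper's exact pair construction and gap definition may differ.

```python
from typing import Callable, Dict, List

# Hypothetical problem record; compositional items are built from existing
# GSM-style word problems, but these exact fields are assumptions.
Problem = Dict[str, object]


def compose_pair(q1: Problem, q2: Problem) -> Problem:
    """Chain two word problems so that the second depends on the first's answer.

    The wrapper text below is illustrative, not the paper's exact template.
    In practice the second question's reference answer must be recomputed after
    substituting X; here we assume q2["answer"] already reflects that.
    """
    question = (
        f"{q1['question']}\n"
        "Let X be the answer to the question above.\n"
        f"Using the value of X, answer the following: {q2['question']}"
    )
    return {"question": question, "answer": q2["answer"]}


def accuracy(solve: Callable[[str], int], problems: List[Problem]) -> float:
    """Fraction of problems whose final numeric answer matches the reference."""
    correct = sum(int(solve(p["question"]) == p["answer"]) for p in problems)
    return correct / len(problems)


def reasoning_gap(solve: Callable[[str], int],
                  singles: List[Problem],
                  pairs: List[Problem]) -> float:
    """Accuracy on independent questions minus accuracy on compositional pairs.

    A positive gap means the model solves the questions in isolation but fails
    when they are chained; this is a simplified stand-in for the paper's exact
    gap definition.
    """
    return accuracy(solve, singles) - accuracy(solve, pairs)
```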
