LLMs Do Not Think Step-by-step In Implicit Reasoning
November 24, 2024
Author: Yijiong Yu
cs.AI
Abstract
It is well known that Chain-of-Thought (CoT) can remarkably enhance LLMs' performance on complex tasks. However, because it also introduces slower inference and higher computational costs, many studies have attempted to use implicit CoT, which does not require LLMs to explicitly generate the intermediate steps. Yet a gap remains between its efficacy and that of typical explicit CoT methods, which raises a doubt: is implicit CoT really equivalent to explicit CoT? In this study, we address this question through experiments. We probe information about intermediate steps from the model's hidden states while it performs implicit CoT. The results surprisingly indicate that LLMs hardly think about intermediate steps, suggesting they may rely on experience rather than strict step-by-step reasoning. Moreover, we find that LLMs' implicit reasoning capabilities are susceptible to influence and unstable, reaffirming the necessity of explicit CoT to effectively support complex tasks.
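To make the probing setup concrete, below is a minimal sketch of how intermediate-step information might be read out from a causal LM's hidden states while it answers a multi-step problem directly, without explicit CoT. The model name (`gpt2`), the arithmetic prompts, and the linear-probe choice are illustrative assumptions for this sketch, not the authors' actual code or data.

```python
# Sketch: probe hidden states for an intermediate result during answer-only
# (implicit-CoT) reasoning. Everything task-specific here is hypothetical.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder causal LM; any model exposing hidden states works
tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

def last_token_states(prompt: str):
    """Hidden state of the final prompt token at every layer (incl. embeddings)."""
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    # out.hidden_states is a tuple of (num_layers + 1) tensors, each (1, seq, dim)
    return [h[0, -1].numpy() for h in out.hidden_states]

# Hypothetical answer-only arithmetic prompts; the first intermediate step
# (e.g. 3 + 5 = 8) is the label the probe tries to recover.
prompts = [
    "Answer directly with a number: 3 + 5 - 2 =",
    "Answer directly with a number: 4 + 6 - 3 =",
]
step1_labels = [8, 10]

# Fit one linear probe per layer. This toy set only demonstrates the mechanics;
# a real experiment needs many prompts and a held-out split, where low probe
# accuracy would suggest the intermediate step is not linearly encoded there.
per_layer = list(zip(*[last_token_states(p) for p in prompts]))
for layer_idx, feats in enumerate(per_layer):
    X = np.stack(feats)
    probe = LogisticRegression(max_iter=1000).fit(X, step1_labels)
    print(f"layer {layer_idx}: train accuracy {probe.score(X, step1_labels):.2f}")
```

A linear probe is a common choice for this kind of analysis because high probe accuracy only requires the intermediate result to be linearly readable from the hidden state, making it a conservative test of whether the model represents that step at all.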