Mind Your Step (by Step): Chain-of-Thought can Reduce Performance on Tasks where Thinking Makes Humans Worse

October 27, 2024
作者: Ryan Liu, Jiayi Geng, Addison J. Wu, Ilia Sucholutsky, Tania Lombrozo, Thomas L. Griffiths
cs.AI

Abstract

Chain-of-thought (CoT) prompting has become a widely used strategy for working with large language and multimodal models. While CoT has been shown to improve performance across many tasks, determining the settings in which it is effective remains an ongoing effort. In particular, it is still an open question in what settings CoT systematically reduces model performance. In this paper, we seek to identify the characteristics of tasks where CoT reduces performance by drawing inspiration from cognitive psychology, looking at cases where (i) verbal thinking or deliberation hurts performance in humans, and (ii) the constraints governing human performance generalize to language models. Three such cases are implicit statistical learning, visual recognition, and classifying with patterns containing exceptions. In extensive experiments across all three settings, we find that a diverse collection of state-of-the-art models exhibit significant drop-offs in performance (e.g., up to 36.3% absolute accuracy for OpenAI o1-preview compared to GPT-4o) when using inference-time reasoning compared to zero-shot counterparts. We also identify three tasks that satisfy condition (i) but not (ii), and find that while verbal thinking reduces human performance in these tasks, CoT retains or increases model performance. Overall, our results show that while there is not an exact parallel between the cognitive processes of models and those of humans, considering cases where thinking has negative consequences for human performance can help us identify settings where it negatively impacts models. By connecting the literature on human deliberation with evaluations of CoT, we offer a new tool that can be used in understanding the impact of prompt choices and inference-time reasoning.

