Mind Your Step (by Step): Chain-of-Thought can Reduce Performance on Tasks where Thinking Makes Humans Worse
October 27, 2024
Authors: Ryan Liu, Jiayi Geng, Addison J. Wu, Ilia Sucholutsky, Tania Lombrozo, Thomas L. Griffiths
cs.AI
Abstract
Chain-of-thought (CoT) prompting has become a widely used strategy for
working with large language and multimodal models. While CoT has been shown to
improve performance across many tasks, determining the settings in which it is
effective remains an ongoing effort. In particular, it is still an open
question in what settings CoT systematically reduces model performance. In this
paper, we seek to identify the characteristics of tasks where CoT reduces
performance by drawing inspiration from cognitive psychology, looking at cases
where (i) verbal thinking or deliberation hurts performance in humans, and (ii)
the constraints governing human performance generalize to language models.
Three such cases are implicit statistical learning, visual recognition, and
classifying with patterns containing exceptions. In extensive experiments
across all three settings, we find that a diverse collection of
state-of-the-art models exhibit significant drop-offs in performance (e.g., up
to 36.3% absolute accuracy for OpenAI o1-preview compared to GPT-4o) when using
inference-time reasoning compared to zero-shot counterparts. We also identify
three tasks that satisfy condition (i) but not (ii), and find that while verbal
thinking reduces human performance in these tasks, CoT retains or increases
model performance. Overall, our results show that while there is not an exact
parallel between the cognitive processes of models and those of humans,
considering cases where thinking has negative consequences for human
performance can help us identify settings where it negatively impacts models.
By connecting the literature on human deliberation with evaluations of CoT, we
offer a new tool that can be used in understanding the impact of prompt choices
and inference-time reasoning.