Chain of Draft: Thinking Faster by Writing Less

February 25, 2025
Authors: Silei Xu, Wenhao Xie, Lingxiao Zhao, Pengcheng He
cs.AI

Abstract

Large Language Models (LLMs) have demonstrated remarkable performance in solving complex reasoning tasks through mechanisms like Chain-of-Thought (CoT) prompting, which emphasizes verbose, step-by-step reasoning. However, humans typically employ a more efficient strategy: drafting concise intermediate thoughts that capture only essential information. In this work, we propose Chain of Draft (CoD), a novel paradigm inspired by human cognitive processes, where LLMs generate minimalistic yet informative intermediate reasoning outputs while solving tasks. By reducing verbosity and focusing on critical insights, CoD matches or surpasses CoT in accuracy while using as little as 7.6% of the tokens, significantly reducing cost and latency across various reasoning tasks.
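
The abstract frames CoD as a prompting strategy: the model is asked to keep each intermediate reasoning step to a terse draft rather than a verbose explanation. The sketch below shows how such a prompt could be wired up against a chat-completion API; the CoD instruction wording, the example question, and the `gpt-4o` model name are illustrative assumptions, not the paper's exact prompt or experimental setup.

```python
# Illustrative sketch: a standard CoT system prompt vs. a Chain-of-Draft-style
# prompt that requests terse intermediate steps. Prompt wording is approximate.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

COT_SYSTEM = (
    "Think step by step to answer the question. "
    "Return the answer at the end of the response after a separator ####."
)

COD_SYSTEM = (
    "Think step by step, but keep only a minimal draft for each thinking step, "
    "with at most five words per step. "
    "Return the answer at the end of the response after a separator ####."
)

def solve(question: str, system_prompt: str, model: str = "gpt-4o") -> str:
    """Query the model with the chosen reasoning-style system prompt."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

question = (
    "Jason had 20 lollipops. He gave Denny some lollipops. "
    "Now Jason has 12 lollipops. How many did he give to Denny?"
)
print(solve(question, COD_SYSTEM))
# A CoD-style output might read: "20 - x = 12; x = 8  #### 8"
```

Because the draft steps are only a few tokens each, the completion is much shorter than a full CoT trace, which is where the reported cost and latency savings come from.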
