DeepSeek-R1 Thoughtology: Let's <think> about LLM Reasoning

April 2, 2025
Authors: Sara Vera Marjanović, Arkil Patel, Vaibhav Adlakha, Milad Aghajohari, Parishad BehnamGhader, Mehar Bhatia, Aditi Khandelwal, Austin Kraft, Benno Krojer, Xing Han Lù, Nicholas Meade, Dongchan Shin, Amirhossein Kazemnejad, Gaurav Kamath, Marius Mosbach, Karolina Stańczak, Siva Reddy
cs.AI

Abstract

Large Reasoning Models like DeepSeek-R1 mark a fundamental shift in how LLMs approach complex problems. Instead of directly producing an answer for a given input, DeepSeek-R1 creates detailed multi-step reasoning chains, seemingly "thinking" about a problem before providing an answer. This reasoning process is publicly available to the user, creating endless opportunities for studying the reasoning behaviour of the model and opening up the field of Thoughtology. Starting from a taxonomy of DeepSeek-R1's basic building blocks of reasoning, our analyses of DeepSeek-R1 investigate the impact and controllability of thought length, the management of long or confusing contexts, cultural and safety concerns, and the status of DeepSeek-R1 vis-à-vis cognitive phenomena such as human-like language processing and world modelling. Our findings paint a nuanced picture. Notably, we show DeepSeek-R1 has a "sweet spot" of reasoning, beyond which extra inference time can impair model performance. Furthermore, we find a tendency for DeepSeek-R1 to persistently ruminate on previously explored problem formulations, obstructing further exploration. We also note strong safety vulnerabilities of DeepSeek-R1 compared to its non-reasoning counterpart, which can also compromise safety-aligned LLMs.
