

What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective

October 31, 2024
Authors: Ming Li, Yanhong Li, Tianyi Zhou
cs.AI

Abstract

What makes a difference in the post-training of LLMs? We investigate the training patterns of different layers in large language models (LLMs) through the lens of gradients, when training with different responses and initial models. We are specifically interested in how fast vs. slow thinking affects the layer-wise gradients, given the recent popularity of training LLMs on reasoning paths such as chain-of-thoughts (CoT) and process rewards. In our study, fast thinking without CoT leads to larger gradients and larger gradient differences across layers than slow thinking (detailed CoT), indicating the learning stability brought by the latter. Moreover, pre-trained LLMs are less affected by the instability of fast thinking than instruction-tuned LLMs. Additionally, we study whether the gradient patterns can reflect the correctness of responses when training different LLMs using slow vs. fast thinking paths. The results show that the gradients of slow thinking can distinguish correct from irrelevant reasoning paths. As a comparison, we conduct similar gradient analyses on non-reasoning knowledge learning tasks, on which, however, trivially increasing the response length does not lead to behaviors similar to slow thinking. Our study strengthens the fundamental understanding of LLM training and offers novel insights into its efficiency and stability, paving the way toward building a generalizable System-2 agent. Our code, data, and gradient statistics can be found at: https://github.com/MingLiiii/Layer_Gradient.
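To make the layer-wise gradient analysis concrete, below is a minimal sketch (not the authors' code; see their repository for the actual statistics) of how one might compare per-layer gradient norms when fine-tuning a causal LM on a "fast thinking" (answer-only) response versus a "slow thinking" (detailed CoT) response. The model name, prompts, and layer-name regex are illustrative assumptions.

```python
# Sketch: per-layer gradient norms for fast vs. slow thinking responses.
# Assumes a LLaMA/Qwen/GPT-2-style parameter naming scheme ("...layers.N..." or "...h.N...").
import re
from collections import defaultdict

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2-0.5B"  # assumption: any small causal LM works for this sketch

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.train()

def layerwise_grad_norms(text: str) -> dict[int, float]:
    """One forward/backward pass; return the L2 gradient norm per decoder layer."""
    model.zero_grad()
    inputs = tokenizer(text, return_tensors="pt")
    # Standard causal-LM loss: labels are the input ids themselves.
    outputs = model(**inputs, labels=inputs["input_ids"])
    outputs.loss.backward()

    sq_norms = defaultdict(float)
    for name, param in model.named_parameters():
        if param.grad is None:
            continue
        match = re.search(r"(?:layers|h)\.(\d+)\.", name)
        if match:  # embeddings / lm_head have no layer index and are skipped
            sq_norms[int(match.group(1))] += param.grad.norm().item() ** 2
    return {layer: total ** 0.5 for layer, total in sorted(sq_norms.items())}

question = "Q: What is 17 * 24?\n"
fast = question + "A: 408"  # fast thinking: answer only, no CoT
slow = question + "A: 17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408"  # slow thinking: step-by-step CoT

fast_norms = layerwise_grad_norms(fast)
slow_norms = layerwise_grad_norms(slow)
for layer in fast_norms:
    print(f"layer {layer:2d}  fast {fast_norms[layer]:.4f}  slow {slow_norms[layer]:.4f}")
```

Under the paper's findings, one would expect the answer-only example to produce larger and more uneven gradient norms across layers than the detailed-CoT example, though a single toy pair like this is only illustrative.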

