Training Large Language Models to Reason in a Continuous Latent Space

December 9, 2024
Authors: Shibo Hao, Sainbayar Sukhbaatar, DiJia Su, Xian Li, Zhiting Hu, Jason Weston, Yuandong Tian
cs.AI

Abstract

Large language models (LLMs) are restricted to reason in the "language space", where they typically express the reasoning process with a chain-of-thought (CoT) to solve a complex reasoning problem. However, we argue that language space may not always be optimal for reasoning. For example, most word tokens are primarily for textual coherence and not essential for reasoning, while some critical tokens require complex planning and pose huge challenges to LLMs. To explore the potential of LLM reasoning in an unrestricted latent space instead of using natural language, we introduce a new paradigm Coconut (Chain of Continuous Thought). We utilize the last hidden state of the LLM as a representation of the reasoning state (termed "continuous thought"). Rather than decoding this into a word token, we feed it back to the LLM as the subsequent input embedding directly in the continuous space. Experiments show that Coconut can effectively augment the LLM on several reasoning tasks. This novel latent reasoning paradigm leads to emergent advanced reasoning patterns: the continuous thought can encode multiple alternative next reasoning steps, allowing the model to perform a breadth-first search (BFS) to solve the problem, rather than prematurely committing to a single deterministic path like CoT. Coconut outperforms CoT in certain logical reasoning tasks that require substantial backtracking during planning, with fewer thinking tokens during inference. These findings demonstrate the promise of latent reasoning and offer valuable insights for future research.
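The core mechanism described above, taking the model's last hidden state and feeding it back as the next input embedding instead of decoding it into a word token, can be sketched with a toy model. This is a minimal illustrative sketch, not the paper's implementation: the `model_step` function, the weight matrices, and the dimension `D` are all hypothetical stand-ins for a real transformer forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy embedding / hidden dimension (a real LLM would use thousands)

# Hypothetical stand-in weights for a single forward pass of the model.
W_in = rng.normal(scale=0.1, size=(D, D))
W_out = rng.normal(scale=0.1, size=(D, D))

def model_step(x):
    """Toy forward pass: input embedding -> last hidden state."""
    return np.tanh(x @ W_in) @ W_out

def latent_reasoning(prompt_embedding, n_thoughts):
    """Run n_thoughts 'continuous thought' steps in latent space.

    Coconut's key idea: rather than projecting the hidden state onto the
    vocabulary and sampling a token, the last hidden state is reused
    directly as the next input embedding.
    """
    h = model_step(prompt_embedding)
    for _ in range(n_thoughts):
        h = model_step(h)  # hidden state fed back as the next input
    return h

h = latent_reasoning(rng.normal(size=D), n_thoughts=3)
print(h.shape)
```

Because no decoding step intervenes, each continuous thought remains a dense vector and can, per the abstract, encode a distribution over multiple alternative next reasoning steps rather than committing to a single token.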
