Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach

February 7, 2025
作者: Jonas Geiping, Sean McLeish, Neel Jain, John Kirchenbauer, Siddharth Singh, Brian R. Bartoldson, Bhavya Kailkhura, Abhinav Bhatele, Tom Goldstein
cs.AI

Abstract

We study a novel language model architecture that is capable of scaling test-time computation by implicitly reasoning in latent space. Our model works by iterating a recurrent block, thereby unrolling to arbitrary depth at test-time. This stands in contrast to mainstream reasoning models that scale up compute by producing more tokens. Unlike approaches based on chain-of-thought, our approach does not require any specialized training data, can work with small context windows, and can capture types of reasoning that are not easily represented in words. We scale a proof-of-concept model to 3.5 billion parameters and 800 billion tokens. We show that the resulting model can improve its performance on reasoning benchmarks, sometimes dramatically, up to a computation load equivalent to 50 billion parameters.
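
The core mechanism described here, iterating a shared block so that effective depth becomes a test-time choice, is easy to illustrate. Below is a minimal PyTorch sketch of the general idea, not the paper's actual architecture: the module names, the random latent-state initialization, and the input re-injection scheme are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class RecurrentDepthLM(nn.Module):
    """Minimal sketch of a depth-recurrent LM: token embeddings are
    injected into a latent state that a single shared block refines
    for a variable number of steps before decoding to logits."""

    def __init__(self, vocab_size, d_model=512, n_heads=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # One transformer layer whose weights are reused at every step.
        self.core = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens, num_steps=4):
        e = self.embed(tokens)        # embedded input, re-injected each step
        s = torch.randn_like(e)       # random initial latent state (assumption)
        for _ in range(num_steps):    # unrolled depth, chosen at test time
            s = self.core(s + e)
        return self.out(s)            # decode the final latent state

# Same weights, different test-time compute budgets:
model = RecurrentDepthLM(vocab_size=32000)
x = torch.randint(0, 32000, (1, 16))
cheap = model(x, num_steps=2)    # shallow unrolling, little compute
deep = model(x, num_steps=32)    # more latent-space iteration, same parameters
```

Because `num_steps` is an ordinary loop bound rather than a fixed stack of layers, the same trained weights can trade inference cost for reasoning depth at deployment time, which is the contrast the abstract draws with token-based (chain-of-thought) scaling.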
