Token Assorted: Mixing Latent and Text Tokens for Improved Language Model Reasoning
February 5, 2025
Authors: DiJia Su, Hanlin Zhu, Yingchen Xu, Jiantao Jiao, Yuandong Tian, Qinqing Zheng
cs.AI
Abstract
Large Language Models (LLMs) excel at reasoning and planning when trained on chain-of-thought (CoT) data, where the step-by-step thought process is explicitly outlined in text tokens. However, this results in lengthy inputs in which many words support textual coherence rather than core reasoning information, and processing these inputs consumes substantial computational resources. In this work, we propose a hybrid representation of the reasoning process, in which we partially abstract away the initial reasoning steps using latent discrete tokens generated by a VQ-VAE, significantly reducing the length of reasoning traces. We explore the use of latent trace abstractions in two scenarios: 1) training a model from scratch for the Keys-Finding Maze problem, and 2) fine-tuning LLMs on this hybrid data with an extended vocabulary that includes unseen latent tokens, for both logical and mathematical reasoning problems. To facilitate effective learning, we introduce a simple training procedure that randomly mixes latent and text tokens, enabling fast adaptation to the new latent tokens. Our approach consistently outperforms the baseline methods on various benchmarks.
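To make the described latent/text mixing concrete, below is a minimal sketch of how such hybrid training examples could be assembled. It is an illustration under stated assumptions, not the authors' actual pipeline: the names `make_hybrid_trace`, `vqvae_encode`, `CHUNK_SIZE`, and `LATENT_OFFSET` are hypothetical. The idea sketched is that a prefix of the chain-of-thought is compressed into discrete VQ-VAE codes mapped into an extended vocabulary range, and the length of that prefix is resampled per example so the model sees many different latent/text boundaries during training.

```python
import random

# Illustrative constants -- assumptions, not values from the paper.
CHUNK_SIZE = 16          # number of text tokens compressed into one latent chunk
LATENT_OFFSET = 50_000   # assumed start of the latent-token id range in the extended vocabulary


def make_hybrid_trace(cot_token_ids, vqvae_encode):
    """Replace a randomly sized prefix of a chain-of-thought with latent tokens.

    The prefix is split into fixed-size chunks; each chunk is compressed by a
    (hypothetical) VQ-VAE encoder into discrete codebook indices, which are
    shifted into the extended vocabulary. The remaining suffix stays as plain
    text tokens, so every example mixes latent and text tokens.
    """
    n_chunks = len(cot_token_ids) // CHUNK_SIZE
    # Randomly choose how many leading chunks to abstract away this time.
    k = random.randint(0, n_chunks)

    latent_prefix = []
    for i in range(k):
        chunk = cot_token_ids[i * CHUNK_SIZE:(i + 1) * CHUNK_SIZE]
        codes = vqvae_encode(chunk)  # discrete codebook indices for this chunk
        latent_prefix.extend(c + LATENT_OFFSET for c in codes)

    text_suffix = cot_token_ids[k * CHUNK_SIZE:]
    return latent_prefix + text_suffix
```

Because the boundary between latent and text tokens is resampled for every example, the fine-tuned model is exposed to the new latent tokens in many contexts, which is one plausible way to realize the "fast adaptation" the abstract refers to.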