Adaptive Decoding via Latent Preference Optimization
November 14, 2024
Authors: Shehzaad Dhuliawala, Ilia Kulikov, Ping Yu, Asli Celikyilmaz, Jason Weston, Sainbayar Sukhbaatar, Jack Lanchantin
cs.AI
Abstract
During language model decoding, it is known that using higher temperature
sampling gives more creative responses, while lower temperatures are more
factually accurate. However, such models are commonly applied to general
instruction following, which involves both creative and fact-seeking tasks,
using a single fixed temperature across all examples and tokens. In this work,
we introduce Adaptive Decoding, a layer added to the model to select the
sampling temperature dynamically at inference time, at either the token or
example level, in order to optimize performance. To learn its parameters, we
introduce Latent Preference Optimization (LPO), a general approach to train
discrete latent variables such as choices of temperature. Our method
outperforms all fixed decoding temperatures across a range of tasks that
require different temperatures, including UltraFeedback, Creative Story
Writing, and GSM8K.
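The core idea of the abstract — a small head that picks a discrete sampling temperature per token, which then rescales the logits before sampling — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `temperature_head` scores and the candidate temperature set are hypothetical placeholders, and LPO training is not shown.

```python
import math
import random

# Assumed discrete temperature options the adaptive layer can choose from
# (the paper's actual candidate set is not specified in the abstract).
TEMPERATURES = [0.1, 0.6, 1.0]

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sample_token(token_logits, temp_logits, rng=random):
    """Token-level adaptive decoding sketch.

    token_logits: model logits over the vocabulary for the next token.
    temp_logits:  scores from a hypothetical temperature-selection head,
                  one score per entry in TEMPERATURES.
    Returns (sampled_token_id, chosen_temperature).
    """
    # Greedily pick the temperature the head scores highest (at inference
    # time a trained head would make this choice per token or per example).
    best = max(range(len(temp_logits)), key=lambda i: temp_logits[i])
    t = TEMPERATURES[best]
    # Rescale logits by the chosen temperature and sample from the softmax.
    probs = softmax([l / t for l in token_logits])
    r, acc = rng.random(), 0.0
    for tok, p in enumerate(probs):
        acc += p
        if r < acc:
            return tok, t
    return len(probs) - 1, t

tok, t = sample_token([2.0, 0.5, -1.0], temp_logits=[0.2, 1.5, 0.1])
```

A low chosen temperature sharpens the distribution toward the argmax token (more factual/deterministic decoding), while a high one flattens it (more creative sampling) — which is exactly the trade-off the adaptive layer is meant to navigate per input.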