Dedicated Feedback and Edit Models Empower Inference-Time Scaling for Open-Ended General-Domain Tasks
March 6, 2025
Authors: Zhilin Wang, Jiaqi Zeng, Olivier Delalleau, Daniel Egert, Ellie Evans, Hoo-Chang Shin, Felipe Soares, Yi Dong, Oleksii Kuchaiev
cs.AI
Abstract
Inference-Time Scaling has been critical to the success of recent models such
as OpenAI o1 and DeepSeek R1. However, many techniques used to train models for
inference-time scaling require tasks to have answers that can be verified,
limiting their application to domains such as math, coding and logical
reasoning. We take inspiration from how humans make first attempts, ask for
detailed feedback from others and make improvements based on such feedback
across a wide spectrum of open-ended endeavors. To this end, we collect data
for and train dedicated Feedback and Edit Models that are capable of performing
inference-time scaling for open-ended general-domain tasks. In our setup, one
model generates an initial response, a second model provides feedback on it,
and a third model then uses that feedback to edit the response. We show that
performance on Arena Hard, a benchmark strongly predictive of Chatbot Arena
Elo, can be boosted by scaling the number of initial response drafts, effective
feedback and edited responses. When scaled optimally, our setup based on 70B
models from the Llama 3 family can reach SoTA performance on Arena Hard at 92.7
as of 5 Mar 2025, surpassing OpenAI o1-preview-2024-09-12 with 90.4 and
DeepSeek R1 with 92.3.
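The setup above is straightforward to express as a loop over drafts, feedbacks, and edits. Below is a minimal Python sketch of that pipeline; every callable (draft_model, feedback_model, edit_model, score) is a hypothetical placeholder rather than the authors' trained Llama 3 models or selection procedure, and serves only to illustrate how compute scales along the three axes.

```python
# Minimal sketch of the three-model Feedback-and-Edit pipeline described above.
# All four callables are hypothetical placeholders standing in for the paper's
# dedicated draft, feedback, edit, and selection models.

def draft_model(prompt: str) -> str:
    return f"draft for: {prompt}"  # placeholder for the initial-response model

def feedback_model(prompt: str, response: str) -> str:
    return f"feedback on: {response}"  # placeholder for the feedback model

def edit_model(prompt: str, response: str, feedback: str) -> str:
    return f"{response} [revised per: {feedback}]"  # placeholder for the edit model

def score(prompt: str, response: str) -> float:
    return float(len(response))  # placeholder quality score (e.g. a reward model)

def feedback_edit_scaling(prompt: str, n_drafts: int = 4,
                          n_feedbacks: int = 2, n_edits: int = 2) -> str:
    """Scale inference-time compute along three axes: number of drafts,
    feedbacks per draft, and edited responses per feedback."""
    candidates = []
    for _ in range(n_drafts):
        draft = draft_model(prompt)
        for _ in range(n_feedbacks):
            fb = feedback_model(prompt, draft)
            for _ in range(n_edits):
                candidates.append(edit_model(prompt, draft, fb))
    # Pick the best edited response under the (placeholder) scoring function.
    return max(candidates, key=lambda r: score(prompt, r))

print(feedback_edit_scaling("Explain inference-time scaling."))
```

Increasing n_drafts, n_feedbacks, or n_edits trades additional inference compute for higher expected quality of the selected response, which is the scaling behavior the abstract reports on Arena Hard.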