ChatPaper.ai


Pairwise RM: Perform Best-of-N Sampling with Knockout Tournament

January 22, 2025
作者: Yantao Liu, Zijun Yao, Rui Min, Yixin Cao, Lei Hou, Juanzi Li
cs.AI

Abstract

Best-of-N (BoN) sampling, a common strategy for test-time scaling of Large Language Models (LLMs), relies on reward models to select the best candidate solution from multiple generations. However, traditional reward models often assign arbitrary and inconsistent scores, limiting their effectiveness. To address this, we propose a Pairwise Reward Model (Pairwise RM) combined with a knockout tournament for BoN sampling. Instead of assigning absolute scores, given a math problem, the Pairwise RM evaluates the correctness of two candidate solutions simultaneously. This approach eliminates the need for arbitrary scoring and enables cross-validation of solutions through parallel comparison. In the knockout tournament, the Pairwise RM conducts pairwise comparisons between candidate solutions and iteratively eliminates the incorrect ones. We construct \ourdataset, a large-scale dataset of 443K pairwise comparisons derived from NumiaMath and annotated using gemini-1.5-flash, and train the Pairwise RM via supervised fine-tuning. Experiments on MATH-500 and the Olympiad Bench demonstrate significant improvements over traditional discriminative reward models, with a 40% to 60% relative improvement on the top 50% most challenging problems.
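The knockout tournament described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `pairwise_judge` stands in for the Pairwise RM's comparison of two candidate solutions, and the pairing and bye-handling details are assumptions.

```python
import random

def knockout_best_of_n(candidates, pairwise_judge):
    """Select one candidate via a knockout tournament.

    pairwise_judge(a, b) returns whichever of the two candidate
    solutions it judges correct (a stand-in for the Pairwise RM).
    """
    pool = list(candidates)
    while len(pool) > 1:
        random.shuffle(pool)  # random pairings each round
        next_round = []
        # Compare candidates pairwise; the judged winner advances.
        for i in range(0, len(pool) - 1, 2):
            next_round.append(pairwise_judge(pool[i], pool[i + 1]))
        if len(pool) % 2 == 1:
            next_round.append(pool[-1])  # odd candidate gets a bye
        pool = next_round
    return pool[0]
```

With N candidates, this makes about N - 1 pairwise calls in total, halving the pool each round until a single solution remains.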

