

Self-Consistency Preference Optimization

November 6, 2024
Authors: Archiki Prasad, Weizhe Yuan, Richard Yuanzhe Pang, Jing Xu, Maryam Fazel-Zarandi, Mohit Bansal, Sainbayar Sukhbaatar, Jason Weston, Jane Yu
cs.AI

Abstract

Self-alignment, whereby models learn to improve themselves without human annotation, is a rapidly growing research area. However, existing techniques often fail to improve complex reasoning tasks due to the difficulty of assigning correct rewards. An orthogonal approach that is known to improve correctness is self-consistency, a method applied at inference time based on multiple sampling in order to find the most consistent answer. In this work, we extend the self-consistency concept to help train models. We thus introduce self-consistency preference optimization (ScPO), which iteratively trains consistent answers to be preferred over inconsistent ones on unsupervised new problems. We show ScPO leads to large improvements over conventional reward model training on reasoning tasks such as GSM8K and MATH, closing the gap with supervised training with gold answers or preferences, and that combining ScPO with standard supervised learning improves results even further. On ZebraLogic, ScPO finetunes Llama-3 8B to be superior to Llama-3 70B, Gemma-2 27B, and Claude-3 Haiku.
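The core idea described in the abstract is to turn self-consistency, normally an inference-time voting procedure, into a training signal by building preference pairs on unlabeled problems. Below is a minimal sketch of that pair-construction step, not the authors' implementation: the `generate` and `extract_answer` helpers, the sample count, and the margin computation are all assumptions introduced for illustration.

```python
# Sketch: sample several reasoning traces for one unlabeled problem, vote on the
# final answers (self-consistency), and form a preference pair where the most
# consistent answer is "chosen" and a least consistent one is "rejected".
from collections import Counter
from typing import Callable, Optional, Tuple


def build_scpo_pair(
    problem: str,
    generate: Callable[[str], str],        # hypothetical helper: samples one reasoning trace
    extract_answer: Callable[[str], str],  # hypothetical helper: parses the final answer from a trace
    num_samples: int = 16,
) -> Optional[Tuple[str, str, float]]:
    """Return (chosen_trace, rejected_trace, vote_margin), or None if all samples agree."""
    traces = [generate(problem) for _ in range(num_samples)]
    answers = [extract_answer(t) for t in traces]
    votes = Counter(answers)
    if len(votes) < 2:
        return None  # no disagreement, so no preference signal for this problem

    ranked = votes.most_common()           # answers sorted by vote count, descending
    top_answer, top_count = ranked[0]
    low_answer, low_count = ranked[-1]

    chosen = next(t for t, a in zip(traces, answers) if a == top_answer)
    rejected = next(t for t, a in zip(traces, answers) if a == low_answer)
    margin = (top_count - low_count) / num_samples  # how decisive the vote was
    return chosen, rejected, margin
```

Pairs collected this way could then be fed to a standard preference-optimization objective (e.g. DPO) and the sample-vote-train loop repeated over iterations, with the vote margin available to weight confident pairs more heavily; the exact loss and weighting are described in the paper itself.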

