OpenCodeReasoning: Advancing Data Distillation for Competitive Coding
April 2, 2025
Authors: Wasi Uddin Ahmad, Sean Narenthiran, Somshubra Majumdar, Aleksander Ficek, Siddhartha Jain, Jocelyn Huang, Vahid Noroozi, Boris Ginsburg
cs.AI
Abstract
Since the advent of reasoning-based large language models, many have found
great success from distilling reasoning capabilities into student models. Such
techniques have significantly bridged the gap between reasoning models and standard
LLMs on coding tasks. Despite this, much of the progress on distilling
reasoning models remains locked behind proprietary datasets or lacks details on
data curation, filtering and subsequent training. To address this, we construct
a superior supervised fine-tuning (SFT) dataset that we use to achieve
state-of-the-art coding capability results in models of various sizes. Our
distilled models use only SFT to achieve 61.8% on LiveCodeBench and 24.6% on
CodeContests, surpassing alternatives trained with reinforcement learning. We
then perform analysis on the data sources used to construct our dataset, the
impact of code execution filtering, and the importance of instruction/solution
diversity. We observe that execution filtering negatively affected benchmark
accuracy, leading us to prioritize instruction diversity over solution
correctness. Finally, we also analyze the token efficiency and reasoning
patterns utilized by these models. We will open-source these datasets and
distilled models to the community.
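The abstract's finding that execution filtering hurt benchmark accuracy refers to the common practice of keeping only solutions that pass their tests. As a rough illustration of what such a filtering step looks like, here is a minimal sketch; the helper name `passes_tests` and the toy `SAMPLES` data are hypothetical, not from the paper, which ultimately chose instruction diversity over this kind of correctness filter.

```python
# Hypothetical sketch of code-execution filtering for an SFT dataset:
# keep a (problem, solution) pair only if the solution passes its tests.
# `passes_tests` and `SAMPLES` are illustrative names, not from the paper.
import os
import subprocess
import sys
import tempfile

def passes_tests(solution_code: str, test_code: str, timeout: float = 5.0) -> bool:
    """Run solution + tests in a subprocess; True iff it exits cleanly."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(solution_code + "\n\n" + test_code + "\n")
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],
            timeout=timeout,
            capture_output=True,  # suppress the solution's stdout/stderr
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False  # treat hangs as failures
    finally:
        os.remove(path)

# Toy dataset: one correct and one buggy solution to the same problem.
SAMPLES = [
    {"solution": "def add(a, b):\n    return a + b",
     "tests": "assert add(2, 3) == 5"},
    {"solution": "def add(a, b):\n    return a - b",
     "tests": "assert add(2, 3) == 5"},
]

filtered = [s for s in SAMPLES if passes_tests(s["solution"], s["tests"])]
```

Only the first sample survives this filter; the paper's analysis suggests that discarding data this aggressively can cost more (in lost instruction diversity) than the incorrect solutions themselves cost in accuracy.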