

S*: Test Time Scaling for Code Generation

February 20, 2025
Authors: Dacheng Li, Shiyi Cao, Chengkun Cao, Xiuyu Li, Shangyin Tan, Kurt Keutzer, Jiarong Xing, Joseph E. Gonzalez, Ion Stoica
cs.AI

Abstract

Increasing test-time compute for LLMs shows promise across domains but remains underexplored in code generation, despite extensive study in math. In this paper, we propose S*, the first hybrid test-time scaling framework that substantially improves the coverage and selection accuracy of generated code. S* extends the existing parallel scaling paradigm with sequential scaling to push performance boundaries. It further leverages a novel selection mechanism that adaptively generates distinguishing inputs for pairwise comparison, combined with execution-grounded information, to robustly identify correct solutions. We evaluate across 12 Large Language Models and Large Reasoning Models and show: (1) S* consistently improves performance across model families and sizes, enabling a 3B model to outperform GPT-4o-mini; (2) S* enables non-reasoning models to surpass reasoning models: GPT-4o-mini with S* outperforms o1-preview by 3.7% on LiveCodeBench; (3) S* further boosts state-of-the-art reasoning models: DeepSeek-R1-Distill-Qwen-32B with S* achieves 85.7% on LiveCodeBench, approaching o1 (high) at 88.5%. Code will be available at https://github.com/NovaSky-AI/SkyThought.
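To make the two stages the abstract describes concrete, below is a minimal Python sketch of hybrid test-time scaling followed by pairwise, execution-grounded selection. This is an interpretation of the abstract, not the authors' released implementation; `generate`, `run_tests`, `propose_input`, `execute`, and `judge` are hypothetical callables standing in for the LLM and a sandboxed code executor.

```python
def hybrid_scale(prompt, generate, run_tests, n_parallel=4, n_rounds=2):
    """Sketch of S*-style hybrid scaling (assumed interfaces, not the
    paper's code): parallel sampling widens coverage, then sequential
    rounds revise each sample using execution feedback on public tests."""
    # Parallel scaling: sample independent candidate programs.
    candidates = [generate(prompt) for _ in range(n_parallel)]
    # Sequential scaling: debug failing candidates with execution feedback,
    # pushing past what parallel sampling alone reaches.
    for _ in range(n_rounds):
        for i, code in enumerate(candidates):
            passed, log = run_tests(code)  # run against public tests
            if not passed:
                candidates[i] = generate(prompt, prior=code, feedback=log)
    return candidates

def select_best(prompt, candidates, propose_input, execute, judge):
    """Adaptive selection: for each pair of candidates, ask the model for
    an input on which the two programs might differ, execute both, and
    keep the one whose output the model judges correct."""
    best = candidates[0]
    for other in candidates[1:]:
        x = propose_input(prompt, best, other)      # distinguishing input
        ya, yb = execute(best, x), execute(other, x)
        # Only consult the judge when actual outputs disagree, so the
        # comparison stays grounded in execution rather than model opinion.
        if ya != yb and judge(prompt, x, ya, yb) == "second":
            best = other
    return best
```

The design point this sketch tries to capture is that the pairwise tournament never relies on the model's self-assessment alone: candidates are compared on concrete inputs, and the judge only adjudicates between two observed, differing outputs.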

