

B4: Towards Optimal Assessment of Plausible Code Solutions with Plausible Tests

September 13, 2024
Authors: Mouxiang Chen, Zhongxin Liu, He Tao, Yusu Hong, David Lo, Xin Xia, Jianling Sun
cs.AI

Abstract

Selecting the best code solution from multiple generated ones is an essential task in code generation, which can be achieved by using some reliable validators (e.g., developer-written test cases) for assistance. Since reliable test cases are not always available and can be expensive to build in practice, researchers propose to automatically generate test cases to assess code solutions. However, when both code solutions and test cases are plausible and not reliable, selecting the best solution becomes challenging. Although some heuristic strategies have been proposed to tackle this problem, they lack a strong theoretical guarantee, and it is still an open question whether an optimal selection strategy exists. Our work contributes in two ways. First, we show that, within a Bayesian framework, the optimal selection strategy can be defined based on the posterior probability of the observed passing states between solutions and tests. The problem of identifying the best solution is then framed as an integer programming problem. Second, we propose an efficient approach for approximating this optimal (yet uncomputable) strategy, where the approximation error is bounded by the correctness of prior knowledge. We then incorporate effective prior knowledge tailored to code generation tasks. Both theoretical and empirical studies confirm that existing heuristics are limited in selecting the best solutions with plausible test cases. Our proposed approximated optimal strategy B4 significantly surpasses existing heuristics in selecting code solutions generated by large language models (LLMs) with LLM-generated tests, achieving a relative performance improvement of up to 50% over the strongest heuristic and 246% over random selection in the most challenging scenarios. Our code is publicly available at https://github.com/ZJU-CTAG/B4.
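
To make the setting concrete, the sketch below illustrates the selection problem the abstract describes: candidate solutions are executed against plausible (possibly incorrect) generated tests, grouped by their passing signatures, and one group is chosen by a simple score. This is a minimal illustration only. The executor `run`, the helpers `build_passing_matrix` and `select_solution`, and the `group size × log(1 + tests passed)` score are assumptions introduced here; they stand in for, and are not, the paper's actual Bayesian posterior and integer-programming formulation of B4 (see the repository for the real method).

```python
# Illustrative sketch only: a simplified consensus-style scorer for the
# "plausible solutions + plausible tests" setting described in the abstract.
# It is NOT the paper's B4 algorithm; B4 defines a Bayesian posterior over
# passing states and solves an integer program.

from collections import defaultdict
from math import log


def build_passing_matrix(solutions, tests, run):
    """run(solution, test) -> bool is a hypothetical sandbox executor."""
    return [[bool(run(s, t)) for t in tests] for s in solutions]


def select_solution(passing_matrix):
    """Group solutions by identical passing signatures and return the index
    of one solution from the highest-scoring group. The score used here
    (group size weighted by log of tests passed) is only a stand-in for a
    principled posterior-based objective."""
    groups = defaultdict(list)  # passing signature -> indices of solutions
    for i, row in enumerate(passing_matrix):
        groups[tuple(row)].append(i)

    def score(signature, members):
        passed = sum(signature)
        return len(members) * log(1 + passed)

    best_signature, best_members = max(groups.items(), key=lambda kv: score(*kv))
    return best_members[0]


if __name__ == "__main__":
    # Toy passing matrix: 3 candidate solutions x 4 plausible tests.
    matrix = [
        [1, 1, 1, 0],
        [1, 1, 1, 0],
        [0, 1, 0, 0],
    ]
    print(select_solution(matrix))  # -> 0 (a member of the best-scoring group)
```

The point of the sketch is the shape of the problem: because the tests themselves may be wrong, simply counting passed tests can favor an incorrect but "agreeable" solution, which is why the paper replaces such heuristics with a posterior-based objective solved as an integer program.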
