SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?
February 17, 2025
Authors: Samuel Miserendino, Michele Wang, Tejal Patwardhan, Johannes Heidecke
cs.AI
Abstract
We introduce SWE-Lancer, a benchmark of over 1,400 freelance software
engineering tasks from Upwork, valued at $1 million USD total in real-world
payouts. SWE-Lancer encompasses both independent engineering tasks--ranging
from $50 bug fixes to $32,000 feature implementations--and managerial tasks,
where models choose between technical implementation proposals. Independent
tasks are graded with end-to-end tests triple-verified by experienced software
engineers, while managerial decisions are assessed against the choices of the
original hired engineering managers. We evaluate model performance and find
that frontier models are still unable to solve the majority of tasks. To
facilitate future research, we open-source a unified Docker image and a public
evaluation split, SWE-Lancer Diamond
(https://github.com/openai/SWELancer-Benchmark). By mapping model performance
to monetary value, we hope SWE-Lancer enables greater research into the
economic impact of AI model development.
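The abstract's mapping from model performance to monetary value can be sketched in a few lines. This is a hypothetical illustration, not code from the benchmark: it assumes each independent task has a real-world payout and is scored pass/fail by its end-to-end tests, so a model's "earnings" are the sum of payouts for fully solved tasks.

```python
# Hypothetical sketch (not the paper's implementation): a model earns a
# task's real-world payout only if it passes that task's end-to-end tests.
def total_earnings(results):
    """results: iterable of (payout_usd, passed) pairs, one per task."""
    return sum(payout for payout, passed in results if passed)

# Example: tasks worth $50, $500, and $32,000; only the first two pass.
tasks = [(50, True), (500, True), (32_000, False)]
print(total_earnings(tasks))  # 550
```

Under this all-or-nothing scoring, a single high-value feature implementation can outweigh many small bug fixes, which is why dollar-weighted results can diverge sharply from plain task pass rates.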