Prompt-to-Leaderboard

February 20, 2025
Authors: Evan Frick, Connor Chen, Joseph Tennyson, Tianle Li, Wei-Lin Chiang, Anastasios N. Angelopoulos, Ion Stoica
cs.AI

Abstract

Large language model (LLM) evaluations typically rely on aggregated metrics like accuracy or human preference, averaging across users and prompts. This averaging obscures user- and prompt-specific variations in model performance. To address this, we propose Prompt-to-Leaderboard (P2L), a method that produces leaderboards specific to a prompt. The core idea is to train an LLM that takes natural language prompts as input and outputs a vector of Bradley-Terry coefficients, which are then used to predict the human preference vote. The resulting prompt-dependent leaderboards allow for unsupervised task-specific evaluation, optimal routing of queries to models, personalization, and automated evaluation of model strengths and weaknesses. Data from Chatbot Arena suggest that P2L captures the nuanced landscape of language model performance better than the averaged leaderboard. Furthermore, our findings suggest that P2L's ability to produce prompt-specific evaluations follows a power-law scaling similar to that observed in LLMs themselves. In January 2025, the router we trained based on this methodology achieved the #1 spot on the Chatbot Arena leaderboard. Our code is available on GitHub: https://github.com/lmarena/p2l.
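For intuition, here is a minimal sketch (not the authors' implementation; their code is at the GitHub link above) of how a prompt-specific vector of Bradley-Terry coefficients can be used to predict a preference vote and to route a query. The function name and the example coefficient values are hypothetical.

```python
import numpy as np

def predict_preference(theta: np.ndarray, i: int, j: int) -> float:
    """Bradley-Terry win probability that model i is preferred over model j,
    given per-prompt coefficients theta (one entry per candidate model)."""
    return 1.0 / (1.0 + np.exp(-(theta[i] - theta[j])))

# Hypothetical coefficients a P2L-style model might emit for one prompt
# and three candidate LLMs.
theta = np.array([1.2, 0.4, -0.3])

print(predict_preference(theta, 0, 1))  # ~0.69: model 0 favored over model 1

# Optimal routing under this model: send the prompt to the model with the
# highest Bradley-Terry coefficient.
best = int(np.argmax(theta))  # -> 0
```

Because the coefficients are conditioned on the prompt, the same pair of models can rank differently across prompts, which is what the per-prompt leaderboards capture.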
