LONGCODEU: Benchmarking Long-Context Language Models on Long Code Understanding
March 6, 2025
Authors: Jia Li, Xuyuan Guo, Lei Li, Kechi Zhang, Ge Li, Jia Li, Zhengwei Tao, Fang Liu, Chongyang Tao, Yuqi Zhu, Zhi Jin
cs.AI
Abstract
Current advanced long-context language models (LCLMs) offer great potential
for real-world software engineering applications. However, progress in this
critical domain remains hampered by a fundamental limitation: the absence of a
rigorous evaluation framework for long code understanding. To bridge this
gap, we propose LONGCODEU, a long code understanding benchmark that evaluates
the long code understanding abilities LCLMs need for practical applications
across four aspects (8 tasks): code unit perception, intra-code unit
understanding, inter-code unit relation understanding, and long code
documentation understanding. We evaluate 9 popular LCLMs (6 general models
and 3 code models) on LONGCODEU. Our experimental results reveal key
limitations in current LCLMs' capabilities for long code understanding.
Particularly, the performance of LCLMs drops dramatically when the long code
length exceeds 32K tokens, falling far short of their claimed 128K-1M context
windows. Among the four aspects, inter-code unit relation understanding is the
most challenging for LCLMs. Our study provides valuable insights for optimizing
LCLMs and driving advancements in software engineering.