Language Models' Factuality Depends on the Language of Inquiry
February 25, 2025
Authors: Tushar Aggarwal, Kumar Tanmay, Ayush Agrawal, Kumar Ayush, Hamid Palangi, Paul Pu Liang
cs.AI
Abstract
Multilingual language models (LMs) are expected to recall factual knowledge
consistently across languages, yet they often fail to transfer knowledge
between languages even when they possess the correct information in one of the
languages. For example, we find that an LM may correctly identify Rashed Al
Shashai as being from Saudi Arabia when asked in Arabic, but consistently fails
to do so when asked in English or Swahili. To systematically investigate this
limitation, we introduce a benchmark of 10,000 country-related facts across 13
languages and propose three novel metrics, Factual Recall Score, Knowledge
Transferability Score, and Cross-Lingual Factual Knowledge Transferability
Score, to quantify factual recall and knowledge transferability in LMs across
different languages. Our results reveal fundamental weaknesses in today's
state-of-the-art LMs, particularly in cross-lingual generalization where models
fail to transfer knowledge effectively across different languages, leading to
inconsistent performance sensitive to the language used. Our findings emphasize
the need for LMs to recognize language-specific factual reliability and
leverage the most trustworthy information across languages. We release our
benchmark and evaluation framework to drive future research in multilingual
knowledge transfer.
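The abstract names three metrics but does not define them here. The sketch below is a minimal illustration of what such scores might look like, assuming Factual Recall Score is per-language accuracy over the fact set and treating transferability as agreement of correctness across language pairs; the function names, formulas, and data are illustrative assumptions, not the paper's actual definitions.

```python
# Illustrative sketch only: the abstract does not define its metrics,
# so these formulas are hypothetical stand-ins.

def factual_recall_score(results: dict[str, list[bool]]) -> dict[str, float]:
    """Per-language fraction of facts the model recalls correctly."""
    return {lang: sum(v) / len(v) for lang, v in results.items()}

def knowledge_transferability_score(results: dict[str, list[bool]]) -> float:
    """Among (fact, language-pair) cases where a fact is known in at least
    one of the two languages, the fraction where it is known in both --
    a simple proxy for cross-lingual knowledge transfer."""
    langs = list(results)
    transferred = total = 0
    for i, a in enumerate(langs):
        for b in langs[i + 1:]:
            for known_a, known_b in zip(results[a], results[b]):
                if known_a or known_b:          # known in at least one language
                    total += 1
                    transferred += known_a and known_b  # known in both
    return transferred / total if total else 0.0

# Toy data: per-language correctness over the same three facts.
results = {
    "en": [True, False, True],
    "ar": [True, True, True],
    "sw": [False, False, True],
}
print(factual_recall_score(results))          # e.g. {'en': 0.667, ...}
print(knowledge_transferability_score(results))
```

A low transferability score with uneven per-language recall would reproduce the failure mode described above, where a fact answerable in Arabic is missed in English or Swahili.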