Global MMLU: Understanding and Addressing Cultural and Linguistic Biases in Multilingual Evaluation
December 4, 2024
作者: Shivalika Singh, Angelika Romanou, Clémentine Fourrier, David I. Adelani, Jian Gang Ngui, Daniel Vila-Suero, Peerat Limkonchotiwat, Kelly Marchisio, Wei Qi Leong, Yosephine Susanto, Raymond Ng, Shayne Longpre, Wei-Yin Ko, Madeline Smith, Antoine Bosselut, Alice Oh, Andre F. T. Martins, Leshem Choshen, Daphne Ippolito, Enzo Ferrante, Marzieh Fadaee, Beyza Ermis, Sara Hooker
cs.AI
Abstract
Cultural biases in multilingual datasets pose significant challenges for
their effectiveness as global benchmarks. These biases stem not only from
language but also from the cultural knowledge required to interpret questions,
reducing the practical utility of translated datasets like MMLU. Furthermore,
translation often introduces artifacts that can distort the meaning or clarity
of questions in the target language. A common practice in multilingual
evaluation is to rely on machine-translated evaluation sets, but simply
translating a dataset is insufficient to address these challenges. In this
work, we trace the impact of both of these issues on multilingual evaluations
and ensuing model performances. Our large-scale evaluation of state-of-the-art
open and proprietary models illustrates that progress on MMLU depends heavily
on learning Western-centric concepts, with 28% of all questions requiring
culturally sensitive knowledge. Moreover, for questions requiring geographic
knowledge, an astounding 84.9% focus on either North American or European
regions. Rankings of model evaluations change depending on whether they are
evaluated on the full portion or the subset of questions annotated as
culturally sensitive, showing the distortion to model rankings when blindly
relying on translated MMLU. We release Global-MMLU, an improved MMLU with
evaluation coverage across 42 languages -- with improved overall quality by
engaging with compensated professional and community annotators to verify
translation quality while also rigorously evaluating cultural biases present in
the original dataset. This comprehensive Global-MMLU set also includes
designated subsets labeled as culturally sensitive and culturally agnostic to
allow for more holistic, complete evaluation.
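Since the released dataset ships with culturally sensitive and culturally agnostic subsets, a typical evaluation would score a model on each subset separately rather than only on the full pool. A minimal sketch of that split using the Hugging Face datasets library is below; the repository id CohereForAI/Global-MMLU, the per-language config name "ar", the test split, and the cultural_sensitivity_label column with "CS"/"CA" values are assumptions not stated in the abstract above.

```python
# Sketch: load one language of Global-MMLU and split it into the
# culturally sensitive vs. culturally agnostic subsets for separate scoring.
# Repo id, config name, split, column name, and label values are assumptions.
from datasets import load_dataset

# Assumed Hugging Face repo id and per-language config (Arabic here).
global_mmlu = load_dataset("CohereForAI/Global-MMLU", "ar", split="test")

# Assumed annotation column marking culturally sensitive (CS)
# and culturally agnostic (CA) questions.
culturally_sensitive = global_mmlu.filter(
    lambda row: row["cultural_sensitivity_label"] == "CS"
)
culturally_agnostic = global_mmlu.filter(
    lambda row: row["cultural_sensitivity_label"] == "CA"
)

# Report subset sizes; downstream evaluation would report accuracy
# on each subset separately to expose ranking shifts.
print(len(culturally_sensitive), len(culturally_agnostic))
```

Reporting the two subset accuracies side by side is what surfaces the ranking changes the abstract describes, since aggregate scores on the full set can mask gaps on culturally sensitive questions.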