

Swan and ArabicMTEB: Dialect-Aware, Arabic-Centric, Cross-Lingual, and Cross-Cultural Embedding Models and Benchmarks

November 2, 2024
Authors: Gagan Bhatia, El Moatez Billah Nagoudi, Abdellah El Mekki, Fakhraddin Alwajih, Muhammad Abdul-Mageed
cs.AI

Abstract

We introduce Swan, a family of embedding models centred around the Arabic language, addressing both small-scale and large-scale use cases. Swan includes two variants: Swan-Small, based on ARBERTv2, and Swan-Large, built on ArMistral, a pretrained Arabic large language model. To evaluate these models, we propose ArabicMTEB, a comprehensive benchmark suite that assesses cross-lingual, multi-dialectal, multi-domain, and multi-cultural Arabic text embedding performance, covering eight diverse tasks and spanning 94 datasets. Swan-Large achieves state-of-the-art results, outperforming Multilingual-E5-large on most Arabic tasks, while Swan-Small consistently surpasses Multilingual-E5-base. Our extensive evaluations demonstrate that Swan models are both dialectally and culturally aware, excelling across various Arabic domains while offering significant monetary efficiency. This work significantly advances the field of Arabic language modelling and provides valuable resources for future research and applications in Arabic natural language processing. Our models and benchmark will be made publicly accessible for research.
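The benchmark tasks described above (e.g. retrieval) score embedding models by how well similarity between embedding vectors reflects relevance. As a minimal sketch of that core mechanic (not the authors' code; the toy vectors stand in for real model output such as Swan embeddings), cosine similarity can rank candidate documents against a query:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_documents(query_emb, doc_embs):
    """Return document indices ordered from most to least similar to the query."""
    scores = [cosine_similarity(query_emb, d) for d in doc_embs]
    return sorted(range(len(doc_embs)), key=lambda i: scores[i], reverse=True)

# Toy 3-dimensional "embeddings" standing in for sentence-encoder output.
query = np.array([1.0, 0.0, 0.0])
docs = [
    np.array([0.9, 0.1, 0.0]),  # nearly parallel to the query
    np.array([0.0, 1.0, 0.0]),  # orthogonal to the query
    np.array([0.7, 0.7, 0.0]),  # partially aligned
]

print(rank_documents(query, docs))  # most similar document index first
```

Retrieval-style MTEB tasks aggregate exactly this kind of ranking (via metrics such as nDCG) over many query-document pairs; classification and clustering tasks instead feed the embeddings to downstream predictors.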

