ChatPaper.ai


Swan and ArabicMTEB: Dialect-Aware, Arabic-Centric, Cross-Lingual, and Cross-Cultural Embedding Models and Benchmarks

November 2, 2024
作者: Gagan Bhatia, El Moatez Billah Nagoudi, Abdellah El Mekki, Fakhraddin Alwajih, Muhammad Abdul-Mageed
cs.AI

摘要

We introduce Swan, a family of embedding models centred around the Arabic language, addressing both small-scale and large-scale use cases. Swan includes two variants: Swan-Small, based on ARBERTv2, and Swan-Large, built on ArMistral, a pretrained Arabic large language model. To evaluate these models, we propose ArabicMTEB, a comprehensive benchmark suite that assesses cross-lingual, multi-dialectal, multi-domain, and multi-cultural Arabic text embedding performance, covering eight diverse tasks and spanning 94 datasets. Swan-Large achieves state-of-the-art results, outperforming Multilingual-E5-large on most Arabic tasks, while Swan-Small consistently surpasses Multilingual-E5-base. Our extensive evaluations demonstrate that Swan models are both dialectally and culturally aware, excelling across various Arabic domains while offering significant monetary efficiency. This work significantly advances the field of Arabic language modelling and provides valuable resources for future research and applications in Arabic natural language processing. Our models and benchmark will be made publicly accessible for research.
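As an illustration of the kind of task an MTEB-style benchmark scores, the sketch below ranks documents against a query by cosine similarity of their embedding vectors. The vectors here are hypothetical toy values standing in for the output of a model such as Swan; this is not the authors' evaluation code, only a minimal example of embedding-based retrieval.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy embeddings (hypothetical values, not real model outputs).
query = [0.9, 0.1, 0.0]
docs = {
    "relevant_doc": [0.8, 0.2, 0.1],    # close to the query in embedding space
    "unrelated_doc": [0.0, 0.1, 0.9],   # far from the query
}

# Retrieval task: rank documents by similarity to the query.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked[0])  # → relevant_doc
```

Retrieval benchmarks then compare such rankings against gold relevance labels, typically with metrics like nDCG@10 or MRR.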


November 13, 2024