EuroBERT: Scaling Multilingual Encoders for European Languages
March 7, 2025
Authors: Nicolas Boizard, Hippolyte Gisserot-Boukhlef, Duarte M. Alves, André Martins, Ayoub Hammal, Caio Corro, Céline Hudelot, Emmanuel Malherbe, Etienne Malaboeuf, Fanny Jourdan, Gabriel Hautreux, João Alves, Kevin El-Haddad, Manuel Faysse, Maxime Peyrard, Nuno M. Guerreiro, Patrick Fernandes, Ricardo Rei, Pierre Colombo
cs.AI
Abstract
General-purpose multilingual vector representations, used in retrieval,
regression and classification, are traditionally obtained from bidirectional
encoder models. Despite their wide applicability, encoders have been recently
overshadowed by advances in generative decoder-only models. However, many
innovations driving this progress are not inherently tied to decoders. In this
paper, we revisit the development of multilingual encoders through the lens of
these advances, and introduce EuroBERT, a family of multilingual encoders
covering European and widely spoken global languages. Our models outperform
existing alternatives across a diverse range of tasks, spanning multilingual
capabilities, mathematics, and coding, and natively support sequences of up
to 8,192 tokens. We also examine the design decisions behind EuroBERT, offering
insights into our dataset composition and training pipeline. We publicly
release the EuroBERT models, including intermediate training checkpoints,
together with our training framework.
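
Since the abstract notes that the models and training checkpoints are publicly released, below is a minimal sketch of how one might load a checkpoint and pool its token states into fixed-size sentence vectors for retrieval or classification. The Hub model ID `EuroBERT/EuroBERT-210m` and the need for `trust_remote_code=True` are assumptions about the release, not details confirmed by the abstract; only the 8,192-token context length comes from the text above.

```python
# Minimal sketch: sentence embeddings from a released EuroBERT checkpoint.
# Assumptions (not stated in the abstract): the checkpoint is hosted on the
# Hugging Face Hub as "EuroBERT/EuroBERT-210m" and ships custom modeling code.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "EuroBERT/EuroBERT-210m"  # assumed Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
model.eval()

sentences = [
    "EuroBERT covers European and widely spoken global languages.",
    "Les encodeurs bidirectionnels produisent des représentations vectorielles.",
]

# The abstract states the encoder natively supports sequences up to 8,192 tokens.
batch = tokenizer(
    sentences, padding=True, truncation=True, max_length=8192, return_tensors="pt"
)

with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # shape: (batch, seq_len, dim)

# Mean-pool over non-padding tokens to obtain one vector per sentence, a common
# recipe for using encoder representations in retrieval and classification.
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)

# Cosine similarity between the two sentence vectors, as used in retrieval.
sim = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(f"cosine similarity: {sim.item():.3f}")
```

Mean pooling is only one pooling choice; depending on how the released checkpoints were trained, a dedicated classification token or a task-specific head may be more appropriate.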