Smarter, Better, Faster, Longer: A Modern Bidirectional Encoder for Fast, Memory Efficient, and Long Context Finetuning and Inference
December 18, 2024
作者: Benjamin Warner, Antoine Chaffin, Benjamin Clavié, Orion Weller, Oskar Hallström, Said Taghadouini, Alexis Gallagher, Raja Biswas, Faisal Ladhak, Tom Aarsen, Nathan Cooper, Griffin Adams, Jeremy Howard, Iacopo Poli
cs.AI
Abstract
Encoder-only transformer models such as BERT offer a great performance-size
tradeoff for retrieval and classification tasks with respect to larger
decoder-only models. Despite being the workhorse of numerous production
pipelines, there have been limited Pareto improvements to BERT since its
release. In this paper, we introduce ModernBERT, bringing modern model
optimizations to encoder-only models and representing a major Pareto
improvement over older encoders. Trained on 2 trillion tokens with a native
8192 sequence length, ModernBERT models exhibit state-of-the-art results on a
large pool of evaluations encompassing diverse classification tasks and both
single and multi-vector retrieval on different domains (including code). In
addition to strong downstream performance, ModernBERT is also the fastest and
most memory-efficient encoder and is designed for inference on common GPUs.
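As a concrete illustration of running the model on common hardware, the minimal sketch below loads a ModernBERT checkpoint through the Hugging Face transformers Auto classes and predicts a masked token. The checkpoint name answerdotai/ModernBERT-base is an assumption about the released weights, and a recent transformers version with ModernBERT support is assumed; this code is illustrative and not taken from the paper.

```python
# Minimal sketch: masked-token inference with an encoder-only model.
# Assumes the checkpoint "answerdotai/ModernBERT-base" exists on the
# Hugging Face Hub and that the installed transformers version supports it.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "answerdotai/ModernBERT-base"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

text = "The capital of France is [MASK]."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and take the highest-scoring vocabulary token.
mask_idx = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_idx].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```

For the classification and retrieval use cases the abstract highlights, the same checkpoint would typically be loaded with a task head (e.g., AutoModelForSequenceClassification for fine-tuning) or used as the backbone of an embedding model rather than through the masked-LM head shown here.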