GPT or BERT: why not both?
October 31, 2024
Authors: Lucas Georges Gabriel Charpentier, David Samuel
cs.AI
Abstract
We present a simple way to merge masked language modeling with causal
language modeling. This hybrid training objective results in a model that
combines the strengths of both modeling paradigms within a single transformer
stack: GPT-BERT can be transparently used like any standard causal or masked
language model. We test the pretraining process that enables this flexible
behavior on the BabyLM Challenge 2024. The results show that the hybrid
pretraining outperforms masked-only or causal-only models. We openly release
the models, training corpora and code.
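As a rough illustration of the idea in the abstract, the sketch below trains one shared transformer stack on both a masked-LM loss and a causal-LM loss, switching only the attention mask between the two objectives. This is a minimal sketch under stated assumptions, not the GPT-BERT recipe: the model class `SharedLM`, the toy hyperparameters, the 15% masking rate, and the equal 50/50 loss weighting are all illustrative choices; the released code should be consulted for the actual training objective.

```python
# Minimal sketch (illustrative only) of a hybrid masked + causal training step
# on one shared transformer stack, in the spirit of the abstract above.
# All names, sizes, the 15% mask rate, and the 50/50 loss mix are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, D_MODEL, N_LAYERS, MASK_ID = 1000, 128, 2, 1

class SharedLM(nn.Module):
    """One transformer stack serving both objectives; only the attention mask differs."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, N_LAYERS)
        self.head = nn.Linear(D_MODEL, VOCAB)

    def forward(self, ids, causal):
        attn_mask = None
        if causal:  # triangular mask -> GPT-style left-to-right attention
            attn_mask = nn.Transformer.generate_square_subsequent_mask(ids.size(1))
        hidden = self.encoder(self.embed(ids), mask=attn_mask)
        return self.head(hidden)

def hybrid_step(model, ids, mask_prob=0.15):
    # Causal objective: predict token t+1 from tokens <= t.
    clm_logits = model(ids[:, :-1], causal=True)
    clm_loss = F.cross_entropy(clm_logits.reshape(-1, VOCAB), ids[:, 1:].reshape(-1))

    # Masked objective: corrupt a random subset of positions, predict the originals.
    corrupted = ids.clone()
    is_masked = torch.rand(ids.shape) < mask_prob
    corrupted[is_masked] = MASK_ID
    mlm_logits = model(corrupted, causal=False)
    targets = ids.masked_fill(~is_masked, -100)       # score only masked positions
    mlm_loss = F.cross_entropy(mlm_logits.reshape(-1, VOCAB), targets.reshape(-1),
                               ignore_index=-100)

    return 0.5 * clm_loss + 0.5 * mlm_loss            # illustrative equal weighting

if __name__ == "__main__":
    torch.manual_seed(0)
    model = SharedLM()
    batch = torch.randint(2, VOCAB, (4, 32))          # toy batch of token ids
    loss = hybrid_step(model, batch)
    loss.backward()
    print("hybrid loss:", float(loss))
```

Because the same weights see both objectives, the resulting model can, as the abstract notes, be queried either causally or with masked inputs at inference time.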