
Trillion 7B Technical Report

April 21, 2025
Authors: Sungjun Han, Juyoung Suk, Suyeong An, Hyungguk Kim, Kyuseok Kim, Wonsuk Yang, Seungtaek Choi, Jamin Shin
cs.AI

Abstract

We introduce Trillion-7B, the most token-efficient Korean-centric multilingual LLM available. Our novel Cross-lingual Document Attention (XLDA) mechanism enables highly efficient and effective knowledge transfer from English to target languages like Korean and Japanese. Combined with optimized data mixtures, language-specific filtering, and tailored tokenizer construction, Trillion-7B achieves competitive performance while dedicating only 10% of its 2T training tokens to multilingual data and requiring just 59.4K H100 GPU hours ($148K) for full training. Comprehensive evaluations across 27 benchmarks in four languages demonstrate Trillion-7B's robust multilingual performance and exceptional cross-lingual consistency.
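The abstract does not spell out XLDA's mechanics, so the following is a minimal sketch of one plausible reading: when documents in different languages are packed into a single training sequence, the attention mask is kept open across document boundaries between languages (instead of the usual per-document masking), giving target-language tokens a direct path to English context. The function name xlda_mask and the doc_ids/doc_langs inputs are illustrative assumptions, not the authors' API.

import numpy as np

def xlda_mask(doc_ids, doc_langs):
    """Build a boolean causal attention mask for one packed sequence.

    Standard document masking blocks attention across document
    boundaries. This hypothetical variant keeps cross-document
    attention open when the two documents are in *different*
    languages, so a Korean document packed after an English one can
    still attend to it. doc_ids and doc_langs give, per token, the
    document index and a language tag (illustrative inputs).
    """
    n = len(doc_ids)
    ids = np.asarray(doc_ids)
    langs = np.asarray(doc_langs)

    causal = np.tril(np.ones((n, n), dtype=bool))   # token i attends to j <= i
    same_doc = ids[:, None] == ids[None, :]         # within one document
    cross_lang = langs[:, None] != langs[None, :]   # different languages

    # Allow attention within a document, or across documents when the
    # languages differ (the hypothesized cross-lingual channel).
    return causal & (same_doc | cross_lang)

# Two English tokens, then three Korean tokens from the next document;
# the Korean tokens can still attend back to the English ones.
mask = xlda_mask(doc_ids=[0, 0, 1, 1, 1],
                 doc_langs=["en", "en", "ko", "ko", "ko"])
print(mask.astype(int))

Under this reading, two same-language documents packed together would remain mutually masked, which is consistent with XLDA being specifically a cross-lingual transfer mechanism rather than a general removal of document masking.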
