Kanana: Compute-efficient Bilingual Language Models
February 26, 2025
Authors: Kanana LLM Team, Yunju Bak, Hojin Lee, Minho Ryu, Jiyeon Ham, Seungjae Jung, Daniel Wontae Nam, Taegyeong Eo, Donghun Lee, Doohae Jung, Boseop Kim, Nayeon Kim, Jaesun Park, Hyunho Kim, Hyunwoong Ko, Changmin Lee, Kyoung-Woon On, Seulye Baeg, Junrae Cho, Sunghee Jung, Jieun Kang, EungGyun Kim, Eunhwa Kim, Byeongil Ko, Daniel Lee, Minchul Lee, Miok Lee, Shinbok Lee, Gaeun Seo
cs.AI
Abstract
We introduce Kanana, a series of bilingual language models that demonstrate
exceptional performance in Korean and competitive performance in English. The
computational cost of Kanana is significantly lower than that of
state-of-the-art models of similar size. The report details the techniques
employed during pre-training to achieve compute-efficient yet competitive
models, including high-quality data filtering, staged pre-training, depth
up-scaling, and pruning and distillation. Furthermore, the report outlines the
methodologies utilized during the post-training of the Kanana models,
encompassing supervised fine-tuning and preference optimization, aimed at
enhancing their capability for seamless interaction with users. Lastly, the
report elaborates on plausible approaches used for language model adaptation to
specific scenarios, such as embedding, retrieval augmented generation, and
function calling. The Kanana model series spans from 2.1B to 32.5B parameters
with 2.1B models (base, instruct, embedding) publicly released to promote
research on Korean language models.
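
The abstract names depth up-scaling among the pre-training techniques. As a rough illustration only, below is a minimal sketch of the general layer-duplication idea behind depth up-scaling, assuming a Hugging Face LlamaForCausalLM-style model whose decoder blocks live in `model.model.layers`; the `depth_up_scale` helper and the `overlap` value are hypothetical and are not taken from the report, which should be consulted for the actual recipe.

```python
# Illustrative sketch of depth up-scaling: concatenate two copies of the
# decoder stack while dropping `overlap` layers at the seam, so an n-layer
# model becomes a 2 * (n - overlap)-layer model that is then further trained.
# Assumes a LlamaForCausalLM-style model; not the report's exact procedure.
import copy

import torch.nn as nn


def depth_up_scale(model, overlap: int = 8) -> nn.ModuleList:
    layers = model.model.layers                                 # original decoder blocks
    n = len(layers)
    lower = [copy.deepcopy(l) for l in layers[: n - overlap]]   # bottom copy minus its top layers
    upper = [copy.deepcopy(l) for l in layers[overlap:]]        # top copy minus its bottom layers
    return nn.ModuleList(lower + upper)                         # up-scaled layer stack
```

In this sketch, the returned stack would replace the model's original layers before continued pre-training of the deeper model.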