Qwen2.5 Technical Report
December 19, 2024
Authors: Qwen, An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, Zihan Qiu
cs.AI
Abstract
In this report, we introduce Qwen2.5, a comprehensive series of large
language models (LLMs) designed to meet diverse needs. Compared to previous
iterations, Qwen2.5 has been significantly improved during both the
pre-training and post-training stages. In terms of pre-training, we have scaled
the high-quality pre-training datasets from the previous 7 trillion tokens to
18 trillion tokens. This provides a strong foundation for common sense, expert
knowledge, and reasoning capabilities. In terms of post-training, we implement
intricate supervised fine-tuning with over 1 million samples, as well as
multistage reinforcement learning. These post-training techniques enhance
alignment with human preferences and notably improve long-text generation,
structured data analysis, and instruction following. To handle diverse and
varied use cases effectively, we present the Qwen2.5 LLM series in a rich
range of sizes. Open-weight offerings include base
and instruction-tuned models, with quantized versions available. In addition,
for hosted solutions, the proprietary models currently include two
mixture-of-experts (MoE) variants: Qwen2.5-Turbo and Qwen2.5-Plus, both
available from Alibaba Cloud Model Studio. Qwen2.5 has demonstrated top-tier
performance on a wide range of benchmarks evaluating language understanding,
reasoning, mathematics, coding, human preference alignment, etc. Specifically,
the open-weight flagship Qwen2.5-72B-Instruct outperforms a number of open and
proprietary models and performs competitively with the state-of-the-art
open-weight model, Llama-3-405B-Instruct, which is around 5
times larger. Qwen2.5-Turbo and Qwen2.5-Plus offer superior cost-effectiveness
while performing competitively against GPT-4o-mini and GPT-4o respectively.
Additionally, serving as the foundation, the Qwen2.5 models have been
instrumental in training specialized models such as Qwen2.5-Math,
Qwen2.5-Coder, QwQ, and multimodal models.
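As a practical note on the open-weight instruction-tuned checkpoints mentioned above, the sketch below shows one way to load and query them with Hugging Face Transformers. It is an illustrative example rather than the report's own code: the Hub repository id Qwen/Qwen2.5-7B-Instruct, the prompt text, and the generation settings are assumptions, and other released sizes follow the same pattern.

# Minimal usage sketch (assumed setup: transformers and accelerate installed,
# with a GPU large enough for the chosen checkpoint).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-7B-Instruct"  # assumed Hub id; swap in any released size

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # choose bf16/fp16 automatically when the hardware supports it
    device_map="auto",    # place weights across available devices
)

# Format the conversation with the model's built-in chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the main improvements in Qwen2.5."},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, not the prompt.
reply = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(reply)

The same chat-template flow applies to the quantized releases; only the repository id changes.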