
Qwen2.5 Technical Report

December 19, 2024
Authors: Qwen, An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, Zihan Qiu
cs.AI

Abstract

In this report, we introduce Qwen2.5, a comprehensive series of large language models (LLMs) designed to meet diverse needs. Compared to previous iterations, Qwen2.5 has been significantly improved during both the pre-training and post-training stages. In terms of pre-training, we have scaled the high-quality pre-training dataset from the previous 7 trillion tokens to 18 trillion tokens. This provides a strong foundation for common sense, expert knowledge, and reasoning capabilities. In terms of post-training, we apply intricate supervised fine-tuning on over 1 million samples, as well as multi-stage reinforcement learning. These post-training techniques improve alignment with human preferences and notably enhance long-text generation, structured-data analysis, and instruction following. To handle diverse and varied use cases effectively, we present the Qwen2.5 LLM series in a rich range of sizes. The open-weight offerings include base and instruction-tuned models, with quantized versions also available. In addition, for hosted solutions, the proprietary models currently include two mixture-of-experts (MoE) variants, Qwen2.5-Turbo and Qwen2.5-Plus, both available from Alibaba Cloud Model Studio. Qwen2.5 has demonstrated top-tier performance on a wide range of benchmarks evaluating language understanding, reasoning, mathematics, coding, human preference alignment, and more. Specifically, the open-weight flagship Qwen2.5-72B-Instruct outperforms a number of open and proprietary models and performs competitively with the state-of-the-art open-weight model, Llama-3-405B-Instruct, which is around 5 times larger. Qwen2.5-Turbo and Qwen2.5-Plus offer superior cost-effectiveness while performing competitively against GPT-4o-mini and GPT-4o, respectively. Additionally, serving as foundations, the Qwen2.5 models have been instrumental in training specialized models such as Qwen2.5-Math, Qwen2.5-Coder, QwQ, and multimodal models.
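
As a practical note on the open-weight instruction-tuned checkpoints mentioned above, the sketch below shows one way to query such a model with the Hugging Face transformers chat-template API. The model ID, prompt, and generation settings are illustrative assumptions on my part, not usage prescribed by the report:

```python
# Minimal sketch: chatting with an open-weight Qwen2.5 instruction-tuned model
# via Hugging Face transformers. Model ID and settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"  # assumed ID; a smaller sibling of the 72B flagship
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Briefly explain what a mixture-of-experts model is."},
]

# Build the prompt with the model's chat template and generate a reply.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
reply = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```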
