
Phi-4-Mini Technical Report: Compact yet Powerful Multimodal Language Models via Mixture-of-LoRAs

March 3, 2025
Authors: Abdelrahman Abouelenin, Atabak Ashfaq, Adam Atkinson, Hany Awadalla, Nguyen Bach, Jianmin Bao, Alon Benhaim, Martin Cai, Vishrav Chaudhary, Congcong Chen, Dong Chen, Dongdong Chen, Junkun Chen, Weizhu Chen, Yen-Chun Chen, Yi-ling Chen, Qi Dai, Xiyang Dai, Ruchao Fan, Mei Gao, Min Gao, Amit Garg, Abhishek Goswami, Junheng Hao, Amr Hendy, Yuxuan Hu, Xin Jin, Mahmoud Khademi, Dongwoo Kim, Young Jin Kim, Gina Lee, Jinyu Li, Yunsheng Li, Chen Liang, Xihui Lin, Zeqi Lin, Mengchen Liu, Yang Liu, Gilsinia Lopez, Chong Luo, Piyush Madan, Vadim Mazalov, Ali Mousavi, Anh Nguyen, Jing Pan, Daniel Perez-Becker, Jacob Platin, Thomas Portet, Kai Qiu, Bo Ren, Liliang Ren, Sambuddha Roy, Ning Shang, Yelong Shen, Saksham Singhal, Subhojit Som, Xia Song, Tetyana Sych, Praneetha Vaddamanu, Shuohang Wang, Yiming Wang, Zhenghao Wang, Haibin Wu, Haoran Xu, Weijian Xu, Yifan Yang, Ziyi Yang, Donghan Yu, Ishmam Zabir, Jianwen Zhang, Li Lyna Zhang, Yunan Zhang, Xiren Zhou
cs.AI

Abstract

We introduce Phi-4-Mini and Phi-4-Multimodal, compact yet highly capable language and multimodal models. Phi-4-Mini is a 3.8-billion-parameter language model trained on high-quality web and synthetic data. It significantly outperforms recent open-source models of similar size and matches the performance of models twice its size on math and coding tasks requiring complex reasoning, an achievement driven by a carefully curated synthetic data recipe emphasizing high-quality math and coding datasets. Compared to its predecessor, Phi-3.5-Mini, Phi-4-Mini features an expanded vocabulary of 200K tokens to better support multilingual applications, as well as grouped-query attention for more efficient long-sequence generation. Phi-4-Multimodal is a multimodal model that integrates text, vision, and speech/audio input modalities into a single model. Its novel modality-extension approach leverages LoRA adapters and modality-specific routers to enable multiple inference modes that combine modalities without interference. For example, it currently ranks first on the OpenASR leaderboard even though the LoRA component for the speech/audio modality has only 460 million parameters. Phi-4-Multimodal supports scenarios involving (vision + language), (vision + speech), and (speech/audio) inputs, outperforming larger vision-language and speech-language models on a wide range of tasks. Additionally, we ran experiments to further train Phi-4-Mini to enhance its reasoning capabilities. Despite its compact 3.8-billion-parameter size, this experimental version achieves reasoning performance on par with or better than significantly larger models, including DeepSeek-R1-Distill-Qwen-7B and DeepSeek-R1-Distill-Llama-8B.
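As context for the modality-extension idea described above, the following is a minimal sketch, assuming a standard PyTorch LoRA formulation, of how a frozen base projection could be combined with per-modality LoRA adapters selected by a simple modality router. All module names, ranks, and the routing-by-tag mechanism are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the Mixture-of-LoRAs idea: a frozen, shared base layer
# plus one low-rank LoRA adapter per input modality, chosen by a modality tag,
# so adding a new modality does not interfere with the others.
import torch
import torch.nn as nn


class LoRAAdapter(nn.Module):
    """Low-rank update (alpha / r) * B(A(x)) applied on top of a frozen weight."""

    def __init__(self, d_in: int, d_out: int, r: int = 16, alpha: float = 32.0):
        super().__init__()
        self.A = nn.Linear(d_in, r, bias=False)   # down-projection
        self.B = nn.Linear(r, d_out, bias=False)  # up-projection
        nn.init.zeros_(self.B.weight)             # adapter starts as a no-op
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.B(self.A(x)) * self.scale


class MixtureOfLoRAsLinear(nn.Module):
    """Frozen base linear layer plus per-modality LoRA adapters."""

    def __init__(self, d_in: int, d_out: int, modalities=("text", "vision", "speech")):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.base.weight.requires_grad_(False)    # base LM weights stay frozen
        self.base.bias.requires_grad_(False)
        self.adapters = nn.ModuleDict({m: LoRAAdapter(d_in, d_out) for m in modalities})

    def forward(self, x: torch.Tensor, modality: str) -> torch.Tensor:
        # The "router" here is simply the modality tag of the incoming tokens;
        # only the matching adapter contributes to the output.
        return self.base(x) + self.adapters[modality](x)


if __name__ == "__main__":
    layer = MixtureOfLoRAsLinear(d_in=64, d_out=64)
    tokens = torch.randn(2, 10, 64)
    print(layer(tokens, "speech").shape)  # torch.Size([2, 10, 64])
```

In this toy setup only the adapter weights are trainable, which mirrors the abstract's point that a modality can be added with a small parameter budget (e.g., the 460M-parameter speech/audio LoRA) while the base language model remains untouched.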
