Read-ME: Refactorizing LLMs as Router-Decoupled Mixture of Experts with System Co-Design
October 24, 2024
Authors: Ruisi Cai, Yeonju Ro, Geon-Woo Kim, Peihao Wang, Babak Ehteshami Bejnordi, Aditya Akella, Zhangyang Wang
cs.AI
Abstract
The proliferation of large language models (LLMs) has led to the adoption of
Mixture-of-Experts (MoE) architectures that dynamically leverage specialized
subnetworks for improved efficiency and performance. Despite their benefits,
MoE models face significant challenges during inference, including inefficient
memory management and suboptimal batching, due to misaligned design choices
between the model architecture and the system policies. Furthermore, the
conventional approach of training MoEs from scratch is increasingly prohibitive
in terms of cost. In this paper, we propose Read-ME, a novel framework that
transforms pre-trained dense LLMs into smaller MoE models (in contrast to
"upcycling" generalist MoEs), avoiding the high cost of training from scratch.
Our approach employs activation sparsity to extract experts. To compose
experts, we examine the widely adopted layer-wise router design, show its
redundancy, and introduce a pre-gating router, decoupled from the MoE backbone,
that facilitates system-friendly pre-computation and lookahead scheduling,
enhancing expert-aware batching and caching. Our co-design therefore
addresses critical gaps on both the algorithmic and system fronts, establishing
a scalable and efficient alternative for LLM inference in resource-constrained
settings. Read-ME outperforms other popular open-source dense models of similar
scale, achieving improvements of up to 10.1% on MMLU and improving mean
end-to-end latency by up to 6.1%. Code is available at:
https://github.com/VITA-Group/READ-ME.
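To make the decoupled routing idea concrete, below is a minimal sketch (PyTorch-style, not the authors' released code) of a pre-gating router that predicts an expert assignment for every MoE layer in one up-front pass, so a serving system could prefetch expert weights and group requests by expert before the backbone runs. The module names, dimensions, and top-1 routing choice are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch of a pre-gating router decoupled from the MoE backbone.
# It maps a token's hidden state to one expert index per MoE layer in a single
# forward pass, so the full routing plan is known before any expert executes.
# All names, sizes, and the top-1 choice are assumptions for illustration.
import torch
import torch.nn as nn


class PreGatingRouter(nn.Module):
    """Predicts one expert index per MoE layer from the token representation."""

    def __init__(self, hidden_dim: int, num_layers: int, num_experts: int):
        super().__init__()
        # One linear routing head per MoE layer, all evaluated up front.
        self.heads = nn.ModuleList(
            nn.Linear(hidden_dim, num_experts) for _ in range(num_layers)
        )

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, hidden_dim) -> expert ids: (batch, num_layers)
        logits = torch.stack([head(hidden) for head in self.heads], dim=1)
        return logits.argmax(dim=-1)


if __name__ == "__main__":
    router = PreGatingRouter(hidden_dim=64, num_layers=4, num_experts=8)
    tokens = torch.randn(2, 64)   # hidden states for two requests
    plan = router(tokens)         # (2, 4): expert id per layer, known in advance
    print(plan)                   # a scheduler could prefetch/batch by these ids
```

Because the routing plan for the whole forward pass is available before any expert runs, a scheduler can batch tokens that share experts and cache or prefetch only the experts that will actually be used, which is the system-side benefit the abstract describes.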