Eager Updates For Overlapped Communication and Computation in DiLoCo
February 18, 2025
Authors: Satyen Kale, Arthur Douillard, Yanislav Donchev
cs.AI
Abstract
Distributed optimization methods such as DiLoCo have been shown to be
effective in training very large models across multiple distributed workers,
such as datacenters. These methods split updates into two parts: an inner
optimization phase, where the workers independently execute multiple
optimization steps on their own local data, and an outer optimization step,
where the inner updates are synchronized. While such approaches require orders
of magnitude less communication than standard data-parallel training, in
settings where the workers are datacenters, even the limited communication
requirements of these approaches can still cause significant slowdowns due to
the blocking necessary at each outer optimization step. In this paper, we
investigate techniques to mitigate this issue by overlapping communication with
computation in a manner that allows the outer optimization step to fully
overlap with the inner optimization phase. We show that a particular variant,
dubbed eager updates, provides competitive performance with standard DiLoCo in
settings with low bandwidth between workers.
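The two-level update the abstract describes can be illustrated with a toy simulation. The sketch below is a minimal, hypothetical illustration (not the authors' implementation): each worker runs several inner SGD steps on a simple local quadratic loss, the outer gradient is the average of per-worker parameter deltas (standing in for the all-reduce), and a `delayed` flag mimics overlapping communication with computation by applying the averaged outer gradient one round late, as if the all-reduce completed during the next inner phase. The eager-updates variant in the paper additionally mixes in the current local delta; that refinement is omitted here for brevity.

```python
import numpy as np

def inner_phase(theta, target, lr=0.2, steps=10):
    """H inner SGD steps on a worker's local quadratic loss 0.5*||theta - target||^2."""
    for _ in range(steps):
        theta = theta - lr * (theta - target)
    return theta

def train(num_workers=4, rounds=20, outer_lr=0.5, delayed=False, seed=0):
    """Toy DiLoCo-style loop; `delayed=True` applies the averaged outer
    gradient one round late, simulating an all-reduce that overlaps with
    the next inner phase instead of blocking."""
    rng = np.random.default_rng(seed)
    targets = rng.normal(size=(num_workers, 3))  # each worker's local optimum
    theta = np.zeros(3)                          # replicated global parameters
    pending = None                               # outer gradient still "in flight"
    for _ in range(rounds):
        # outer gradient: average of per-worker parameter deltas (the all-reduce)
        deltas = np.stack([theta - inner_phase(theta.copy(), t) for t in targets])
        avg = deltas.mean(axis=0)
        if delayed:
            # the all-reduce started this round only completes during the
            # next inner phase, so its result is applied one round late
            if pending is not None:
                theta = theta - outer_lr * pending
            pending = avg
        else:
            theta = theta - outer_lr * avg       # standard DiLoCo: blocking outer step
    return theta
```

With these quadratic losses both variants converge to the mean of the workers' local optima; the delayed variant merely takes a slightly different trajectory, which is the trade-off the paper quantifies at scale.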