Abstract
Asynchronous stochastic gradient descent (SGD) is attractive from a speed perspective because workers do not wait for synchronization. However, the Transformer model converges poorly with asynchronous SGD, yielding substantially lower quality than synchronous SGD. To investigate why, we isolate the differences between the asynchronous and synchronous methods, examining the effects of batch size and gradient staleness. We find that summing several asynchronous updates, rather than applying each one immediately, restores convergence behavior. With this hybrid method, Transformer training for a neural machine translation task reaches near-convergence 1.36x faster in single-node multi-GPU training with no impact on model quality.
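As a rough illustration of the accumulation idea described in the abstract (a sketch, not the paper's actual implementation), the following Python simulation contrasts applying each gradient immediately with summing several of them into one update. The toy least-squares objective, the worker count, the accumulation size, and the learning-rate scaling of the accumulated gradient are all assumptions made here for illustration.

```python
import numpy as np

# Toy objective: least squares on synthetic data (assumption for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(1024, 8))
true_w = rng.normal(size=8)
y = X @ true_w + 0.01 * rng.normal(size=1024)

def grad(w, batch_idx):
    """Mini-batch gradient of 0.5 * ||X w - y||^2, averaged over the batch."""
    Xb, yb = X[batch_idx], y[batch_idx]
    return Xb.T @ (Xb @ w - yb) / len(batch_idx)

w = np.zeros(8)
lr = 0.1
num_workers = 4           # hypothetical number of asynchronous workers
accumulate = num_workers  # sum this many updates before applying them

buffer = np.zeros_like(w)
pending = 0
for step in range(2000):
    # Each "worker" gradient is computed against a possibly stale copy of w,
    # since w only changes when the buffer is flushed.
    batch = rng.choice(len(X), size=32, replace=False)
    buffer += grad(w, batch)
    pending += 1
    # Hybrid rule: instead of applying each gradient as it arrives (pure
    # asynchronous SGD), sum `accumulate` of them and apply one combined
    # update, mimicking a larger synchronous batch. Averaging the buffer
    # here is an assumption; the exact scaling is a design choice.
    if pending == accumulate:
        w -= lr * buffer / accumulate
        buffer[:] = 0.0
        pending = 0

print("parameter error:", np.linalg.norm(w - true_w))
```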
URL
https://arxiv.org/abs/1906.03496