Abstract
Communication bottlenecks hinder the scalability of distributed neural network training, particularly on distributed-memory computing clusters. To significantly reduce this communication overhead, we introduce AB-training, a novel data-parallel training method that decomposes weight matrices into low-rank representations and utilizes independent group-based training. This approach consistently reduces network traffic by 50% across multiple scaling scenarios, increasing the training potential of communication-constrained systems. Our method exhibits regularization effects at smaller scales, leading to improved generalization for models like VGG16, while achieving a remarkable 44.14 : 1 compression ratio during training on CIFAR-10 and maintaining competitive accuracy. Although these results are promising, our experiments reveal that large-batch effects remain a challenge even in low-rank training regimes.
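The abstract does not spell out how the low-rank representation is formed. As a rough, hedged illustration only (not the authors' implementation), the sketch below uses a truncated SVD in PyTorch to factor a weight matrix W into A and B; the rank, matrix sizes, and the helper `low_rank_factors` are hypothetical choices made here to show why such a factorization can cut the number of values a data-parallel worker needs to synchronize.

```python
# Illustrative sketch, assuming a truncated-SVD factorization; AB-training's
# actual procedure may differ from this.
import torch

def low_rank_factors(W: torch.Tensor, rank: int):
    """Return A, B with W ~= A @ B via a rank-truncated SVD."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    sqrt_S = torch.sqrt(S[:rank])
    A = U[:, :rank] * sqrt_S           # shape (m, rank)
    B = sqrt_S[:, None] * Vh[:rank]    # shape (rank, n)
    return A, B

W = torch.randn(512, 512)
A, B = low_rank_factors(W, rank=32)
full_params = W.numel()                   # 262,144 values in the full matrix
low_rank_params = A.numel() + B.numel()   # 32,768 values in the two factors
print(full_params / low_rank_params)      # 8.0x fewer values to communicate
```

In a data-parallel setting, synchronizing only the A and B factors instead of the full weight matrix is one way the communication volume per update could shrink; the 50% traffic reduction and 44.14 : 1 compression ratio reported above are the paper's measured results, not outputs of this sketch.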
URL
https://arxiv.org/abs/2405.01067