Abstract
In Federated Learning (FL), training is conducted on client devices, which typically have limited computational resources and storage capacity. To address these constraints, we propose an automatic pruning scheme tailored for FL systems. Our solution improves computational efficiency on client devices while minimizing communication costs. One challenge of tuning pruning hyper-parameters in FL systems is the restricted access to clients' local data. We therefore introduce an automatic pruning paradigm that dynamically determines pruning boundaries. Additionally, we utilize a structured pruning algorithm optimized for mobile devices that lack hardware support for sparse computations. Experimental results demonstrate the effectiveness of our approach, achieving accuracy comparable to existing methods. Our method reduces the number of parameters by 89% and FLOPs by 90%, with minimal accuracy loss on the FEMNIST and CelebFaces datasets. Furthermore, our pruning method decreases communication overhead by up to 5x and halves inference time when deployed on Android devices.
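To make the abstract's core idea concrete, below is a minimal sketch of structured (channel-level) pruning with a dynamically determined boundary. It illustrates the general technique described, not the paper's actual algorithm: the L1-norm importance score, the mean-based threshold, and the `keep_ratio_floor` safeguard are all assumptions made for illustration.

```python
# Sketch only: structured channel pruning with a dynamic boundary.
# The importance score (L1 norm) and threshold (mean score) are
# assumptions, not the method from the paper.
import torch
import torch.nn as nn


def prune_conv_channels(conv: nn.Conv2d, keep_ratio_floor: float = 0.1) -> nn.Conv2d:
    """Return a smaller Conv2d keeping only output channels whose
    L1 norm exceeds a dynamically chosen boundary (here: the mean)."""
    with torch.no_grad():
        # Importance score per output channel: L1 norm of its filter.
        scores = conv.weight.abs().sum(dim=(1, 2, 3))
        # Dynamic boundary: prune channels below the mean score, but
        # always retain at least `keep_ratio_floor` of the channels.
        keep = scores >= scores.mean()
        min_keep = max(1, int(keep_ratio_floor * conv.out_channels))
        if keep.sum() < min_keep:
            keep = torch.zeros_like(keep)
            keep[scores.topk(min_keep).indices] = True
        idx = keep.nonzero(as_tuple=True)[0]

        # Build a dense, smaller layer. Structured pruning keeps the
        # tensors dense, so no sparse-compute hardware is required.
        pruned = nn.Conv2d(
            conv.in_channels, len(idx), conv.kernel_size,
            stride=conv.stride, padding=conv.padding,
            bias=conv.bias is not None,
        )
        pruned.weight.copy_(conv.weight[idx])
        if conv.bias is not None:
            pruned.bias.copy_(conv.bias[idx])
    return pruned


# Usage: shrink one layer and check the parameter reduction.
layer = nn.Conv2d(32, 64, kernel_size=3, padding=1)
smaller = prune_conv_channels(layer)
print(layer.weight.numel(), "->", smaller.weight.numel())
```

In a full network, the input channels of the following layer would also have to be trimmed to match `idx`; the sketch omits that bookkeeping for brevity.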
URL
https://arxiv.org/abs/2411.01759