Abstract
Federated learning (FL) is a general principle for decentralized clients to train a server model collectively without sharing local data. FL is a promising framework with practical applications, but its standard training paradigm requires the clients to backpropagate through the model to compute gradients. Since these clients are typically edge devices and not fully trusted, executing backpropagation on them incurs computational and storage overhead as well as white-box vulnerability. In light of this, we develop backpropagation-free federated learning, dubbed BAFFLE, in which backpropagation is replaced by multiple forward processes to estimate gradients. BAFFLE is 1) memory-efficient and easily fits within upload bandwidth limits; 2) compatible with inference-only hardware optimization and model quantization or pruning; and 3) well suited to trusted execution environments, because the clients in BAFFLE only execute forward propagation and return a set of scalars to the server. Empirically, we use BAFFLE to train deep models from scratch or to finetune pretrained models, achieving acceptable results. Code is available at this https URL.
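The core mechanism, replacing backpropagation with multiple forward passes, amounts to a zeroth-order (finite-difference) gradient estimate. Below is a minimal PyTorch sketch of one common estimator of this kind; the function name, the hyperparameters `sigma` and `k`, and the forward-difference form are illustrative assumptions, and BAFFLE's exact estimator, perturbation distribution, and client/server split may differ from this sketch.

```python
import torch
import torch.nn as nn

def forward_only_grad_estimate(model, loss_fn, data, target, sigma=1e-3, k=50):
    """Estimate the loss gradient w.r.t. all model parameters using only
    forward passes (a zeroth-order sketch; names and defaults are assumptions,
    not the paper's exact method)."""
    with torch.no_grad():  # forward propagation only; no backpropagation
        # Flatten all parameters into a single vector for perturbation.
        params = nn.utils.parameters_to_vector(model.parameters())
        grad = torch.zeros_like(params)
        base_loss = loss_fn(model(data), target)
        for _ in range(k):
            delta = torch.randn_like(params)  # random perturbation direction
            nn.utils.vector_to_parameters(params + sigma * delta,
                                          model.parameters())
            # Each perturbation yields one scalar loss difference --
            # the kind of scalar a client could report to the server.
            diff = loss_fn(model(data), target) - base_loss
            grad += (diff / sigma) * delta
        # Restore the unperturbed weights before returning.
        nn.utils.vector_to_parameters(params, model.parameters())
    return grad / k
```

In an FL deployment, one plausible (assumed) division of labor is that each client runs these forward passes locally and uploads only the scalar loss differences, while the server reconstructs the gradient estimate from shared random perturbations; this would match the abstract's claim that clients return only a set of scalars.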
URL
https://arxiv.org/abs/2301.12195