Abstract
Federated Learning (FL) is a form of distributed learning that allows multiple institutions or clients to collaboratively train a global model for a shared task. This lets the model use information from every institution while preserving data privacy. However, recent studies show that existing methods do not uphold this promise of privacy: training data from the different institutions can be reconstructed by exploiting the gradients transferred between the clients and the global server during training, or by knowledge of the model architecture at the client end. In this paper, we propose a federated learning framework for semantic segmentation that requires neither knowledge of the model architecture nor the transfer of gradients between client and server, thus enabling better privacy preservation. We propose BlackFed, a black-box adaptation of neural networks that uses zeroth-order optimization (ZOO) to update the client model weights and first-order optimization (FOO) to update the server weights. We evaluate our approach on several computer vision and medical imaging datasets to demonstrate its effectiveness. To the best of our knowledge, this is one of the first works to employ federated learning for segmentation without exchanging gradients or model information. Code: this https URL
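The abstract does not spell out how the ZOO/FOO split is implemented, so the following is only an illustrative sketch of the general idea: the client's weights are updated with an SPSA-style two-point zeroth-order estimate (so no gradients ever leave the client), while the server's weights are updated with exact first-order gradients. The toy linear layers, the regression loss, and all hyperparameters here are hypothetical stand-ins for the actual segmentation networks, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy split model: a client layer (updated by ZOO) feeding a server layer (updated by FOO).
w_client = rng.normal(size=(4, 3)) * 0.1
w_server = rng.normal(size=(3, 1)) * 0.1

# Hypothetical toy data standing in for a client's private dataset.
X = rng.normal(size=(32, 4))
y = X.sum(axis=1, keepdims=True)

def loss(wc, ws):
    h = X @ wc       # client forward pass (only activations would cross to the server)
    pred = h @ ws    # server forward pass
    return np.mean((pred - y) ** 2)

lr, mu = 0.05, 1e-3
loss_before = loss(w_client, w_server)

for step in range(200):
    # Client update: SPSA-style two-point zeroth-order gradient estimate.
    # Only two scalar loss evaluations are needed; no backpropagated gradients.
    delta = rng.choice([-1.0, 1.0], size=w_client.shape)
    directional = (loss(w_client + mu * delta, w_server)
                   - loss(w_client - mu * delta, w_server)) / (2 * mu)
    w_client -= lr * directional * delta

    # Server update: exact first-order gradient of the MSE loss w.r.t. w_server.
    h = X @ w_client
    pred = h @ w_server
    grad_ws = 2 * h.T @ (pred - y) / len(X)
    w_server -= lr * grad_ws

loss_after = loss(w_client, w_server)
```

In this sketch the client never exposes its gradients or architecture: it only reports loss values under perturbed weights, which is what makes a zeroth-order update possible on the client side while the server still enjoys ordinary gradient descent.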
URL
https://arxiv.org/abs/2410.24181