Abstract
Due to escalating privacy concerns, federated learning has been recognized as a vital approach for training deep neural networks with decentralized medical data. In practice, it is challenging to ensure consistent imaging quality across institutions, often because equipment malfunctions affect a minority of clients. This imbalance in image quality can cause the federated model to develop an inherent bias towards higher-quality images, posing a severe fairness issue. In this study, we pioneer the identification and formulation of this new fairness challenge in the context of imaging quality shift. Traditional methods for promoting fairness in federated learning predominantly focus on balancing empirical risks across diverse client distributions. This strategy facilitates fair optimization across different training data distributions, yet neglects the crucial aspect of generalization. To address this, we introduce a solution termed Federated learning with Inter-client Sharpness Matching (FedISM). FedISM enhances both local training and global aggregation by incorporating sharpness-awareness, aiming to harmonize sharpness levels across clients for fair generalization. Our empirical evaluations, conducted on the widely used ICH and ISIC 2019 datasets, establish FedISM's superiority over current state-of-the-art federated learning methods in promoting fairness. Code is available at this https URL.
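The "sharpness-awareness" the abstract refers to is typically realized with a SAM-style update: perturb the weights toward the locally worst-case direction, then descend using the gradient taken at that perturbed point, so training favors flat minima. The sketch below illustrates one such local update step; it is a minimal illustration of the general sharpness-aware minimization idea, not the paper's actual FedISM implementation, and all function and parameter names (`sam_step`, `rho`, `quad_grad`) are assumptions for this example.

```python
import math

def sam_step(w, loss_grad_fn, lr=0.1, rho=0.05):
    """One SAM-style update: ascend to a worst-case neighbor within an
    L2 ball of radius rho, then descend using the gradient taken there."""
    g = loss_grad_fn(w)
    norm = math.sqrt(sum(gi * gi for gi in g)) + 1e-12
    eps = [rho * gi / norm for gi in g]           # ascent perturbation
    w_adv = [wi + ei for wi, ei in zip(w, eps)]   # worst-case neighbor
    g_adv = loss_grad_fn(w_adv)                   # sharpness-aware gradient
    return [wi - lr * gi for wi, gi in zip(w, g_adv)]

# Toy example: quadratic loss L(w) = sum(w_i^2), whose gradient is 2w.
quad_grad = lambda w: [2.0 * wi for wi in w]
w = [1.0, -2.0]
for _ in range(50):
    w = sam_step(w, quad_grad)
# w approaches the (flat) minimum at the origin
```

In a federated setting, each client would run such sharpness-aware local steps, and the server could weight aggregation by per-client sharpness estimates to match sharpness levels across clients, which is the direction FedISM pursues.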
URL
https://arxiv.org/abs/2404.17805