
From Optimization to Generalization: Fair Federated Learning against Quality Shift via Inter-Client Sharpness Matching

2024-04-27 07:05:41
Nannan Wu, Zhuo Kuang, Zengqiang Yan, Li Yu

Abstract

Due to escalating privacy concerns, federated learning has been recognized as a vital approach for training deep neural networks with decentralized medical data. In practice, it is challenging to ensure consistent imaging quality across various institutions, often attributed to equipment malfunctions affecting a minority of clients. This imbalance in image quality can cause the federated model to develop an inherent bias towards higher-quality images, thus posing a severe fairness issue. In this study, we pioneer the identification and formulation of this new fairness challenge within the context of the imaging quality shift. Traditional methods for promoting fairness in federated learning predominantly focus on balancing empirical risks across diverse client distributions. This strategy primarily facilitates fair optimization across different training data distributions, yet neglects the crucial aspect of generalization. To address this, we introduce a solution termed Federated learning with Inter-client Sharpness Matching (FedISM). FedISM enhances both local training and global aggregation by incorporating sharpness-awareness, aiming to harmonize the sharpness levels across clients for fair generalization. Our empirical evaluations, conducted using the widely-used ICH and ISIC 2019 datasets, establish FedISM's superiority over current state-of-the-art federated learning methods in promoting fairness. Code is available at this https URL.
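The abstract describes sharpness-aware local training whose per-client sharpness the server then tries to match during aggregation. The paper's exact procedure is not reproduced here; the sketch below only illustrates, under stated assumptions, how a SAM-style weight perturbation could enter a client's local update and how a per-client sharpness proxy might be reported back to the server. All identifiers (`local_sam_step`, `rho`) are illustrative and are not the authors' code or API.

```python
# Minimal sketch (not the authors' implementation): one sharpness-aware
# local update a client might run under a FedISM-like scheme.
import torch
import torch.nn.functional as F

def local_sam_step(model, optimizer, x, y, rho=0.05):
    """Perturb weights toward the locally worst-case direction within an
    L2 ball of radius `rho`, then take the descent step from there."""
    # 1) First forward/backward pass: gradients give the ascent direction.
    loss = F.cross_entropy(model(x), y)
    loss.backward()

    grad_norm = torch.norm(
        torch.stack([p.grad.norm(2) for p in model.parameters() if p.grad is not None]), 2
    )
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)          # climb to the "sharp" neighborhood point
            eps.append(e)
    optimizer.zero_grad()

    # 2) Second pass at the perturbed weights; the gap to `loss` is a
    #    common proxy for the sharpness of the client's loss landscape.
    perturbed_loss = F.cross_entropy(model(x), y)
    perturbed_loss.backward()

    with torch.no_grad():      # undo the perturbation before the real step
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    optimizer.step()           # descend using gradients from the perturbed point
    optimizer.zero_grad()

    # Sharpness proxy the client could report for inter-client matching.
    return (perturbed_loss - loss).item()
```

A FedISM-style server could, for instance, use these reported sharpness proxies to reweight client updates so that clients sitting in sharper minima are not left behind at aggregation time; this is one plausible reading of "inter-client sharpness matching," not a statement of the paper's actual algorithm.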

Abstract (translated)

Owing to escalating privacy concerns, federated learning is regarded as an important approach for training deep neural networks on decentralized medical data. In practice, it is very challenging to ensure consistent imaging quality across institutions, a situation usually attributed to equipment faults affecting a minority of clients. This imbalance in image quality can cause the federated model to become biased towards high-quality images, raising a serious fairness issue. In this work, we are the first to identify and formulate this new fairness challenge in the context of imaging quality shift. Traditional methods for promoting fairness in federated learning mainly focus on balancing empirical risks across different client distributions. This strategy primarily achieves fair optimization across different training data distributions, yet overlooks the crucial aspect of generalization. To address this, we propose a solution named Federated learning with Inter-client Sharpness Matching (FedISM). FedISM enhances local training and global aggregation by introducing sharpness-awareness, aiming to harmonize sharpness levels across clients for fair generalization. Our empirical evaluation on the widely used ICH and ISIC 2019 datasets shows that FedISM outperforms current state-of-the-art federated learning methods in promoting fairness. Code is available at this https URL.

URL

https://arxiv.org/abs/2404.17805

PDF

https://arxiv.org/pdf/2404.17805.pdf

