Paper Reading AI Learner

FedRare: Federated Learning with Intra- and Inter-Client Contrast for Effective Rare Disease Classification

2022-06-28 07:37:38
Nannan Wu, Li Yu, Xin Yang, Kwang-Ting Cheng, Zengqiang Yan

Abstract

Federated learning (FL), which enables different medical institutions or clients to train a model collaboratively without leaking private data, has recently drawn great attention in the medical imaging community. Though inter-client data heterogeneity has been thoroughly studied, the class imbalance problem caused by rare diseases is still under-explored. In this paper, we propose a novel FL framework, FedRare, for medical image classification, designed to handle data heterogeneity in the presence of rare diseases. In FedRare, each client trains a model locally to extract highly separable latent features for classification via intra-client supervised contrastive learning. Considering the limited data on rare diseases, we build positive sample queues for augmentation (i.e., data re-sampling). The server in FedRare collects the latent features from clients and automatically selects the most reliable ones as guidance sent back to the clients. Each client is then jointly trained with an inter-client contrastive loss that aligns its latent features to the federated latent features of all classes. In this way, parameter/feature variances across clients are effectively minimized, leading to better convergence and improved performance. Experimental results on a publicly available dataset for skin lesion diagnosis demonstrate FedRare's superior performance. Under the 10-client federated setting, where four clients have no rare disease samples, FedRare achieves an average increase of 9.60% and 5.90% in balanced accuracy compared to the baseline framework FedAvg and the state-of-the-art approach FedIRM, respectively. Given the broad existence of rare diseases in clinical scenarios, we believe FedRare will benefit future FL framework design for medical image classification. The source code of this paper is publicly available at this https URL.
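The abstract names two loss terms: an intra-client supervised contrastive loss that makes each client's latent features separable by class, and an inter-client contrastive loss that aligns local features to server-selected federated features. Below is a minimal NumPy sketch of these two loss families, not the authors' implementation: the function names, the use of class prototypes as the "federated latent features", and the temperature value are illustrative assumptions.

```python
import numpy as np

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Intra-client supervised contrastive loss over L2-normalized features.

    For each anchor, samples sharing its label are positives; every other
    non-anchor sample appears in the softmax denominator.
    """
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = z @ z.T / temperature                      # pairwise cosine similarities
    n = len(labels)
    eye = np.eye(n, dtype=bool)
    sim_masked = np.where(eye, -np.inf, sim)         # exclude each anchor from its own denominator
    log_prob = sim_masked - np.log(np.exp(sim_masked).sum(axis=1, keepdims=True))
    pos_mask = (labels[:, None] == labels[None, :]) & ~eye
    has_pos = pos_mask.sum(axis=1) > 0               # anchors without positives are skipped
    mean_log_prob_pos = (np.where(pos_mask, log_prob, 0.0).sum(axis=1)[has_pos]
                         / pos_mask.sum(axis=1)[has_pos])
    return -mean_log_prob_pos.mean()

def inter_client_alignment_loss(features, labels, prototypes, temperature=0.1):
    """Hypothetical inter-client loss: pull each local feature toward the
    server-provided prototype of its class and away from other prototypes
    (a softmax over feature-prototype similarities)."""
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    logits = z @ p.T / temperature
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(labels)), labels].mean()
```

In a sketch like this, each client would minimize its classification loss plus `supervised_contrastive_loss` on local batches, while the server-returned class features would enter through `inter_client_alignment_loss`; how FedRare actually selects and combines the federated features is described in the paper itself.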

URL

https://arxiv.org/abs/2206.13803

PDF

https://arxiv.org/pdf/2206.13803.pdf

