Paper Reading AI Learner

FedMPQ: Secure and Communication-Efficient Federated Learning with Multi-codebook Product Quantization

2024-04-21 08:27:36
Xu Yang, Jiapeng Zhang, Qifeng Zhang, Zhuo Tang

Abstract

In federated learning, particularly in cross-device scenarios, secure aggregation has recently gained popularity because it effectively defends against inference attacks by malicious aggregators. However, secure aggregation often requires additional communication overhead and can slow the convergence of the global model, which is particularly challenging in wireless network environments with extremely limited bandwidth. Achieving efficient communication compression under the premise of secure aggregation is therefore a highly challenging and valuable problem. In this work, we propose a novel uplink communication compression method for federated learning, named FedMPQ, which is based on product quantization with multiple shared codebooks. Specifically, we utilize updates from the previous round to generate sufficiently robust codebooks. Secure aggregation is then achieved through trusted execution environments (TEE) or a trusted third party (TTP). In contrast to previous works, our approach exhibits greater robustness in scenarios where data is not independently and identically distributed (non-IID) and sufficient public data is unavailable. Experiments conducted on the LEAF dataset demonstrate that our proposed method achieves 99% of the baseline's final accuracy while reducing uplink communication by 90-95%.
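The compression step described in the abstract follows the standard product-quantization recipe: a flattened model update is split into sub-vectors, each sub-vector is mapped to the nearest centroid of a shared codebook, and only the centroid indices are sent uplink, where the server reconstructs an approximate update by table lookup. The NumPy sketch below is purely illustrative; the function names (pq_encode, pq_decode), the shapes, and the random toy codebooks are assumptions, not the paper's implementation (in FedMPQ the codebooks would be derived from the previous round's updates and aggregation would run inside a TEE or at a TTP).

```python
# Minimal, illustrative sketch of shared-codebook product quantization of a
# model update (assumed names and shapes; not FedMPQ's actual code).
import numpy as np

def pq_encode(update, codebooks):
    """Quantize a flat update vector with one codebook per sub-vector.

    update:    1-D array of length M * d (M sub-vectors of dimension d).
    codebooks: array of shape (M, K, d), assumed shared by clients and server.
    Returns M centroid indices -- the compressed uplink payload.
    """
    m, k, d = codebooks.shape
    subvecs = update.reshape(m, d)                                   # split into sub-vectors
    # distance from each sub-vector to every centroid of its own codebook
    dists = np.linalg.norm(subvecs[:, None, :] - codebooks, axis=-1)  # shape (M, K)
    return dists.argmin(axis=1)                                      # nearest centroid per sub-vector

def pq_decode(indices, codebooks):
    """Server-side lossy reconstruction of the update from codebook indices."""
    m = codebooks.shape[0]
    return codebooks[np.arange(m), indices].reshape(-1)

# Toy usage: a 1024-dim update, 128 sub-vectors of dim 8, 256 centroids each.
rng = np.random.default_rng(0)
codebooks = rng.normal(size=(128, 256, 8))   # in FedMPQ these would come from last round's updates
update = rng.normal(size=128 * 8)
codes = pq_encode(update, codebooks)         # 128 one-byte indices instead of 1024 float32 values
approx = pq_decode(codes, codebooks)
print(codes.shape, np.linalg.norm(update - approx))
```

With 256 centroids per codebook, each sub-vector costs one byte instead of d float32 values, which is the kind of 90-95% uplink reduction the abstract reports; accuracy then depends on how well the shared codebooks cover the clients' updates.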

Abstract (translated)

In federated learning, especially in cross-device scenarios, secure aggregation has recently gained popularity because it effectively defends against inference attacks by malicious aggregators. However, secure aggregation usually incurs additional communication overhead and can slow the convergence of the global model, especially in wireless network environments where bandwidth is extremely limited. Achieving efficient communication compression under the premise of secure aggregation is therefore a highly challenging and valuable problem. In this work, we propose a novel uplink communication compression method for federated learning, named FedMPQ, based on product quantization with multiple shared codebooks. Specifically, we use updates from the previous round to generate sufficiently robust codebooks. Secure aggregation is then achieved through a trusted execution environment (TEE) or a trusted third party (TTP). Compared with previous work, our method is more robust when data is not independently and identically distributed (non-IID) and when sufficient public data is lacking. Experiments on the LEAF dataset show that our method reaches 99% of the baseline's final accuracy while reducing uplink communication by 90-95%.

URL

https://arxiv.org/abs/2404.13575

PDF

https://arxiv.org/pdf/2404.13575.pdf

