Abstract
In federated learning, particularly in cross-device scenarios, secure aggregation has recently gained popularity because it effectively defends against inference attacks by malicious aggregators. However, secure aggregation often incurs additional communication overhead and can slow the convergence of the global model, which is especially challenging in wireless network environments with extremely limited bandwidth. Achieving efficient communication compression under the constraint of secure aggregation is therefore a challenging and valuable problem. In this work, we propose FedMPQ, a novel uplink communication compression method for federated learning based on multi-shared-codebook product quantization. Specifically, we utilize updates from the previous round to generate sufficiently robust codebooks. Secure aggregation is then achieved through a trusted execution environment (TEE) or a trusted third party (TTP). In contrast to previous works, our approach exhibits greater robustness in scenarios where data is not independently and identically distributed (non-IID) and sufficient public data is unavailable. Experiments on the LEAF dataset demonstrate that our method achieves 99% of the baseline's final accuracy while reducing uplink communication by 90-95%.
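To make the compression idea concrete, the following is a minimal sketch of product quantization applied to a flattened model update: the update is split into sub-vectors, each sub-vector is replaced by the index of its nearest codeword in a shared codebook, and only the indices are uplinked. The codebook shapes, dimensions, and the random stand-in for "codebooks learned from the previous round's updates" are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def pq_encode(update, codebooks):
    """Encode a flattened update as per-sub-vector codeword indices.

    update:    (d,) flattened model update
    codebooks: (m, k, d // m) -- m codebooks of k codewords each
    """
    m, k, sub = codebooks.shape
    subs = update.reshape(m, sub)                          # split into m sub-vectors
    dists = ((subs[:, None, :] - codebooks) ** 2).sum(-1)  # (m, k) squared L2 distances
    return dists.argmin(axis=1)                            # (m,) nearest-codeword indices

def pq_decode(codes, codebooks):
    """Reconstruct the update by concatenating the selected codewords."""
    m = codebooks.shape[0]
    return codebooks[np.arange(m), codes].reshape(-1)

rng = np.random.default_rng(0)
d, m, k = 64, 8, 16  # hypothetical sizes: 64-dim update, 8 sub-vectors, 16 codewords each
# Stand-in for codebooks derived from prior-round updates (random here for illustration).
codebooks = rng.normal(size=(m, k, d // m))
update = rng.normal(size=d)

codes = pq_encode(update, codebooks)
recon = pq_decode(codes, codebooks)
# Uplink cost: m * log2(k) = 8 * 4 = 32 bits of indices instead of
# d * 32 = 2048 bits of float32 values, before any entropy coding.
```

The codebook itself need not be uplinked each round: because it is shared (derived from updates the server already holds), clients send only index vectors, which is where the 90-95% uplink reduction comes from.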
URL
https://arxiv.org/abs/2404.13575