Abstract
Federated learning (FL) enables privacy-preserving model training by exposing only users' model gradients. Yet FL users are susceptible to gradient inversion (GI) attacks, which can reconstruct ground-truth training data, such as images, from model gradients. Existing GI attacks struggle to reconstruct high-resolution images for two reasons: inferior accuracy and slow convergence, especially in complicated settings, e.g., when the training batch size on each FL user is much greater than 1. To address these challenges, we present a Robust, Accurate and Fast-convergent GI attack algorithm, called RAF-GI, with two components: 1) an Additional Convolution Block (ACB), which restores labels with up to 20% improvement over existing works; and 2) a Total variance, three-channel mEan and cAnny edge detection regularization term (TEA), a white-box attack strategy that reconstructs images based on the labels inferred by ACB. Moreover, RAF-GI is robust in that it can still accurately reconstruct ground-truth data when the users' training batch size is no more than 48. Our experimental results show that RAF-GI reduces time costs by 94% while achieving superb inversion quality on the ImageNet dataset. Notably, with a batch size of 1, RAF-GI achieves a Peak Signal-to-Noise Ratio (PSNR) 7.89 higher than state-of-the-art baselines.
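The abstract describes TEA as a composite image prior combining total variance, a three-channel mean term, and a Canny-edge term. A minimal sketch of such a regularizer is below, with several assumptions: the weighting coefficients `a`, `b`, `c` are illustrative, the `target_means` prior stands in for whatever per-channel statistics the attack uses, and a simple gradient-magnitude proxy replaces an actual Canny detector — the paper's exact TEA formulation may differ.

```python
import numpy as np

def total_variation(img):
    # img: (C, H, W) float array; anisotropic TV penalizing
    # absolute differences between neighboring pixels
    dh = np.abs(np.diff(img, axis=1)).sum()
    dw = np.abs(np.diff(img, axis=2)).sum()
    return float(dh + dw)

def channel_mean_penalty(img, target_means):
    # pull each channel's mean toward a prior value
    # (e.g., dataset-level channel statistics)
    return float(np.sum((img.mean(axis=(1, 2)) - target_means) ** 2))

def edge_penalty(img, edge_map):
    # hypothetical stand-in for the Canny term: penalize image
    # gradients in regions where the reference edge map is 0,
    # so sharp transitions are allowed only along expected edges
    gy = np.abs(np.diff(img, axis=1, append=img[:, -1:, :]))
    gx = np.abs(np.diff(img, axis=2, append=img[:, :, -1:]))
    return float(((1.0 - edge_map) * (gy + gx)).sum())

def tea_regularizer(img, target_means, edge_map, a=1e-4, b=1e-2, c=1e-4):
    # weighted sum of the three priors; coefficients are illustrative
    return (a * total_variation(img)
            + b * channel_mean_penalty(img, target_means)
            + c * edge_penalty(img, edge_map))
```

In a GI attack, a term like this would be added to the gradient-matching loss and minimized jointly over the candidate image, steering the optimizer toward smooth, statistically plausible reconstructions.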
URL
https://arxiv.org/abs/2403.08383