Abstract
Data augmentation is widely used to mitigate bias in training datasets. However, it also exposes machine learning models to privacy attacks, such as membership inference attacks. In this paper, we propose an effective combination of data augmentation and machine unlearning that reduces data bias while providing a provable defense against known attacks. Specifically, we maintain the fairness of the trained model with diffusion-based data augmentation, and then apply multi-shard unlearning to remove identifying information about the original data from the model, protecting it against privacy attacks. Experimental evaluation across diverse datasets demonstrates that our approach achieves significant improvements in bias reduction as well as robustness against state-of-the-art privacy attacks.
URL
https://arxiv.org/abs/2404.13194