Abstract
Transfer learning is a popular method for tuning pretrained (upstream) models for different downstream tasks using limited data and computational resources. We study how an adversary with control over an upstream model used in transfer learning can conduct property inference attacks on a victim's tuned downstream model, for example to infer whether images of a specific individual appear in the downstream training set. We demonstrate attacks in which an adversary can manipulate the upstream model to conduct highly effective and specific property inference attacks (AUC score $> 0.9$) without incurring significant performance loss on the main task. The main idea of the manipulation is to make the upstream model generate activations (intermediate features) with different distributions for samples with and without a target property, thus enabling the adversary to distinguish easily between downstream models trained with and without training examples that have the target property. Our code is available at this https URL.
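The core idea can be illustrated with a minimal toy simulation (a sketch of the intuition only, not the paper's actual attack; the upstream feature map, the mean-based stand-in for a tuned downstream model, and the `offset` test statistic are all hypothetical simplifications): a manipulated upstream model emits shifted activations for samples with the target property, so a downstream model trained on even a few such samples absorbs the shift and becomes easy to distinguish.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # activation dimensionality

# Hypothetical manipulated upstream model: for samples WITH the target
# property it shifts the activations by a fixed `offset`, giving them a
# different distribution from all other samples (the abstract's main idea).
offset = 3.0 * np.ones(d)

def upstream(n, has_property):
    base = rng.normal(size=(n, d))  # stand-in for benign features
    return base + (offset if has_property else 0.0)

def train_downstream(with_property):
    # Toy downstream "model": the mean activation of its training set,
    # a stand-in for a tuned head whose parameters absorb the feature shift.
    feats = [upstream(90, False)]
    if with_property:
        feats.append(upstream(10, True))  # 10% property samples
    return np.concatenate(feats).mean(axis=0)

def score(model):
    # Adversary's test statistic: projection onto the known offset direction.
    return model @ offset

scores_with = [score(train_downstream(True)) for _ in range(50)]
scores_without = [score(train_downstream(False)) for _ in range(50)]

# Empirical AUC of the distinguishing game between the two model populations.
auc = np.mean([[sw > so for so in scores_without] for sw in scores_with])
print(f"AUC = {auc:.2f}")
```

Because the planted shift dominates the sampling noise of the training-set mean, the two score populations separate almost perfectly, mirroring the AUC $> 0.9$ regime the abstract reports.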
URL
https://arxiv.org/abs/2303.11643