Abstract
Model adaptation aims to solve the domain transfer problem under the constraint of accessing only the pretrained source models. With growing concern for data privacy and transmission efficiency, this paradigm has been gaining popularity. This paper studies the vulnerability of model adaptation algorithms to universal attacks transferred from the source domain, a risk that arises from the possible existence of malicious model providers. We explore both universal adversarial perturbations and backdoor attacks as loopholes on the source side and discover that they still survive in the target models after adaptation. To address this issue, we propose a model preprocessing framework, named AdaptGuard, to improve the security of model adaptation algorithms. AdaptGuard avoids direct use of the risky source parameters through knowledge distillation and utilizes pseudo-adversarial samples under an adjusted perturbation radius to enhance robustness. AdaptGuard is a plug-and-play module that requires neither robust pretrained models nor any modification to the subsequent model adaptation algorithms. Extensive results on three commonly used datasets and two popular adaptation methods validate that AdaptGuard can effectively defend against universal attacks while simultaneously maintaining clean accuracy in the target domain. We hope this research will shed light on the safety and robustness of transfer learning.
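The two core ingredients described above — distilling knowledge out of the (possibly compromised) source model into a fresh student, and training that student on pseudo-adversarial samples crafted within a bounded radius — can be illustrated with a minimal sketch. This is not the paper's implementation: the linear models, the temperature value, the FGSM-style single-step perturbation, and all variable names here are illustrative assumptions.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-softened softmax with the usual max-shift for stability.
    z = z / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical frozen "source" (teacher) head and fresh "target" (student) head.
rng = np.random.default_rng(0)
W_teacher = rng.normal(size=(3, 5))   # risky source parameters (never reused directly)
W_student = np.zeros((3, 5))          # clean student, trained from scratch

def kd_loss_and_grad(W_s, x, T=2.0):
    """KL(teacher || student) on softened outputs; returns loss and dL/dW_s."""
    p = softmax(W_teacher @ x, T)     # teacher output used only as a soft label
    q = softmax(W_s @ x, T)
    loss = float(np.sum(p * (np.log(p) - np.log(q))))
    grad_logits = (q - p) / T         # gradient of the KL wrt student logits
    return loss, np.outer(grad_logits, x)

def pseudo_adversarial(W_s, x, radius=0.1, T=2.0):
    """One FGSM-style ascent step on the input, bounded by an adjusted radius.

    The teacher's soft label is treated as fixed, so the gradient flows only
    through the student logits (logits = W_s @ x, hence dL/dx = W_s.T @ dL/dlogits).
    """
    p = softmax(W_teacher @ x, T)
    q = softmax(W_s @ x, T)
    grad_x = W_s.T @ ((q - p) / T)
    return x + radius * np.sign(grad_x)

# Distill: fit the student on both clean and pseudo-adversarial inputs.
lr = 0.5
for step in range(200):
    x = rng.normal(size=5)
    for xi in (x, pseudo_adversarial(W_student, x)):
        _, g = kd_loss_and_grad(W_student, xi)
        W_student -= lr * g
```

The design point this sketch captures is that the student never copies the source weights; it only matches the source model's input-output behavior, while the bounded pseudo-adversarial samples smooth the student around each input so that small transferred perturbations are less effective.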
Abstract (translated)
模型适应旨在在仅能访问预训练源模型的约束下解决域迁移问题。随着对数据隐私和传输效率的日益关注,这一范式近来逐渐流行。本文研究由于恶意提供者可能存在,模型适应算法对从源域迁移而来的通用攻击的脆弱性。我们探索了源端的通用对抗扰动和后门攻击这两类漏洞,并发现它们在适应之后仍然存活于目标模型中。为了解决这一问题,我们提出了一个名为 AdaptGuard 的模型预处理框架,以提高模型适应算法的安全性。AdaptGuard 通过知识蒸馏避免直接使用有风险的源参数,并利用调整半径下的伪对抗样本来增强鲁棒性。AdaptGuard 是一个即插即用模块,既不需要鲁棒的预训练模型,也不需要对后续的模型适应算法做任何改动。在三个常用数据集和两种流行适应方法上的大量实验结果验证了 AdaptGuard 能够在目标域中有效防御通用攻击,同时保持干净样本的准确率。我们希望这项研究能为迁移学习的安全性和鲁棒性提供启示。
URL
https://arxiv.org/abs/2303.10594