Abstract
Recently, diffusion-based purification (DBP) has emerged as a promising defense against adversarial attacks. However, previous studies have evaluated the robustness of DBP models with questionable methods, and their explanations of DBP robustness lack experimental support. We re-examine DBP robustness using precise gradients and discuss the impact of stochasticity on DBP robustness. To better explain DBP robustness, we assess it under a novel attack setting, Deterministic White-box, and pinpoint stochasticity as the main factor in DBP robustness. Our results suggest that DBP models rely on stochasticity to evade the most effective attack direction, rather than directly countering adversarial perturbations. To improve the robustness of DBP models, we propose Adversarial Denoising Diffusion Training (ADDT). This technique uses Classifier-Guided Perturbation Optimization (CGPO) to generate adversarial perturbations under the guidance of a pre-trained classifier, and Rank-Based Gaussian Mapping (RBGM) to map adversarial perturbations onto a Gaussian distribution. Empirical results show that ADDT improves the robustness of DBP models. Further experiments confirm that ADDT equips DBP models with the ability to directly counter adversarial perturbations.
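The abstract describes RBGM as mapping adversarial perturbations onto a Gaussian distribution. One natural way to realize a rank-based mapping is to replace each perturbation value with a standard-normal sample of the same rank; the sketch below illustrates that idea and is only an assumption about the paper's method (the function name and exact procedure are hypothetical, not taken from the paper).

```python
import numpy as np

def rank_based_gaussian_mapping(perturbation, rng=None):
    """Sketch of a rank-based Gaussian mapping (RBGM-like; hypothetical).

    Replaces each entry of `perturbation` with a sample from N(0, 1),
    assigned so that the rank order of the entries is preserved.
    """
    rng = np.random.default_rng() if rng is None else rng
    flat = perturbation.ravel()
    # Sorted standard-normal samples, one per perturbation entry.
    gaussian = np.sort(rng.standard_normal(flat.size))
    # Rank of each entry: argsort of argsort gives 0-based ranks.
    ranks = flat.argsort().argsort()
    return gaussian[ranks].reshape(perturbation.shape)
```

The output has exactly Gaussian-sampled values while keeping the ordering (and hence the spatial "shape") of the original perturbation, which is one plausible reading of converting a perturbation into a Gaussian-distributed one.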
URL
https://arxiv.org/abs/2404.14309