Abstract
Deep neural networks are widely recognized as vulnerable to adversarial perturbations, so developing robust classifiers is crucial. To date, two well-known defenses have been adopted to improve the learning of robust classifiers: adversarial training (AT) and Jacobian regularization. However, each approach behaves differently against adversarial perturbations. We first carefully analyze and characterize these two schools of approaches, both theoretically and empirically, to demonstrate how each impacts the robust learning of a classifier. We then propose a novel Optimal Transport with Jacobian regularization method, dubbed OTJR, which jointly incorporates input-output Jacobian regularization into AT by leveraging optimal transport theory. In particular, we employ the Sliced Wasserstein (SW) distance, which efficiently pushes the representations of adversarial samples closer to those of clean samples, regardless of the number of classes in the dataset. The SW distance also provides the movement directions of adversarial samples, which are far more informative and powerful for the Jacobian regularization. Our extensive experiments demonstrate the effectiveness of the proposed method: it consistently enhances model robustness on the CIFAR-100 dataset under various adversarial attack settings, achieving up to 28.49% under AutoAttack.
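The abstract's key computational ingredient is the Sliced Wasserstein distance between the representations of clean and adversarial samples. As a minimal illustration of the idea (not the paper's implementation), the SW distance can be estimated by projecting both point clouds onto random directions and solving the resulting one-dimensional optimal transport problems, which reduce to sorting; the function name and NumPy-based setup below are assumptions for the sketch.

```python
import numpy as np

def sliced_wasserstein(clean, adv, n_projections=50, seed=0):
    """Monte-Carlo estimate of the (squared) sliced 2-Wasserstein distance
    between two sets of feature vectors of shape (n_samples, dim).

    This is an illustrative sketch, not the OTJR training objective itself.
    """
    rng = np.random.default_rng(seed)
    dim = clean.shape[1]
    # Sample random unit directions on the sphere.
    theta = rng.normal(size=(n_projections, dim))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both point clouds onto each direction: (n_samples, n_projections).
    proj_clean = clean @ theta.T
    proj_adv = adv @ theta.T
    # 1-D optimal transport matches sorted projections.
    proj_clean.sort(axis=0)
    proj_adv.sort(axis=0)
    # Average squared displacement over samples and projections.
    return np.mean((proj_clean - proj_adv) ** 2)
```

Because each projection is one-dimensional, the cost is dominated by sorting, so the estimate scales well with the feature dimension and is independent of the number of classes, which is the property the abstract highlights.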
URL
https://arxiv.org/abs/2303.11793