Abstract
Deep neural networks (DNNs) are known to be vulnerable to adversarial examples crafted with imperceptible perturbations: a small change to an input image can induce a misclassification and thus threaten the reliability of deployed deep learning systems. Adversarial training (AT) is frequently used to improve the robustness of DNNs by training on a mixture of corrupted and clean data. However, existing AT-based methods are either computationally expensive in generating such adversarial examples, and thus cannot meet the real-time requirements of real-world scenarios, or fail to produce interpretable predictions for transferred adversarial examples generated to fool a wide spectrum of defense models. In this work, we propose Jacobian norm with Selective Input Gradient Regularization (J-SIGR), which selectively regularizes gradient-based saliency maps through Jacobian normalization so that the model's predictions remain interpretable with respect to the input. In this way, we defend DNNs with both high interpretability and computational efficiency. Finally, we evaluate our method across different architectures against powerful adversarial attacks. Experiments demonstrate that the proposed J-SIGR confers improved robustness against transferred adversarial attacks and that the network's predictions are easy to interpret.
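To illustrate the general idea of penalizing input gradients during training, here is a minimal PyTorch sketch of a training step with a Jacobian-norm (input-gradient) regularizer. This is not the authors' J-SIGR implementation (it omits the selective regularization of saliency maps); the names `jacobian_norm_penalty`, `train_step`, and the weight `lam` are hypothetical, and a standard classifier `model` with cross-entropy loss is assumed.

```python
# Sketch only: generic input-gradient (Jacobian norm) regularization,
# not the paper's selective J-SIGR method.
import torch
import torch.nn.functional as F

def jacobian_norm_penalty(model, x, y):
    """Squared L2 norm of the loss gradient w.r.t. the input."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    # create_graph=True so the penalty itself can be backpropagated
    grad, = torch.autograd.grad(loss, x, create_graph=True)
    return grad.pow(2).flatten(1).sum(dim=1).mean()

def train_step(model, optimizer, x, y, lam=0.1):
    optimizer.zero_grad()
    clean_loss = F.cross_entropy(model(x), y)
    reg = jacobian_norm_penalty(model, x, y)
    loss = clean_loss + lam * reg  # lam: hypothetical regularization weight
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the penalty only requires one extra backward pass through the input rather than an iterative attack, such regularization is typically much cheaper per step than generating multi-step adversarial examples, which is the efficiency argument the abstract alludes to.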
URL
https://arxiv.org/abs/2207.13036