Abstract
Vision Transformers (ViTs) have become a popular paradigm for vision tasks, achieving state-of-the-art performance on image classification. However, although early works suggested that this architecture is more robust to adversarial attacks, later works argue that ViTs remain vulnerable. This paper presents a first attempt at detecting adversarial attacks at inference time using the network's inputs, outputs, and latent features. We design four quantifications (or derivatives) of the input, output, and latent vectors of ViT-based models that provide a signature of the inference, which could be beneficial for attack detection, and empirically study their behavior on clean and adversarial samples. The results demonstrate that the quantifications derived from the input (images) and output (posterior probabilities) are promising for distinguishing clean from adversarial samples, while the latent vectors offer less discriminative power, though they provide some insight into how adversarial perturbations work.
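The abstract does not specify the four quantifications, but the output-side idea can be illustrated with a minimal sketch: compute simple statistics of the posterior probabilities (here, hypothetically, predictive entropy and the maximum posterior) as a per-inference signature, on the assumption that adversarial perturbations tend to shift these statistics relative to clean samples. The function names and the choice of statistics below are illustrative assumptions, not the paper's actual quantifications.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def output_signature(logits):
    """Illustrative output-side quantifications of one inference:
    predictive entropy and maximum posterior probability.
    (Hypothetical stand-ins for the paper's signature statistics.)"""
    p = softmax(logits)
    entropy = -sum(pi * math.log(pi + 1e-12) for pi in p)
    return {"entropy": entropy, "max_posterior": max(p)}

# A confident (clean-like) posterior vs. a diffuse (perturbed-like) one:
confident = output_signature([8.0, 0.5, 0.2, 0.1])
diffuse = output_signature([1.2, 1.0, 0.9, 0.8])
```

A detector could then threshold, or train a lightweight classifier on, such signatures collected from clean and adversarially perturbed inputs.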
URL
https://arxiv.org/abs/2301.13356