Abstract
Pretrained vision-language models (VLMs) such as CLIP have shown impressive generalization across a variety of downstream tasks, yet they remain vulnerable to adversarial attacks. While prior research has focused primarily on improving the adversarial robustness of image encoders against image-based attacks, text-based and multimodal attacks have been largely overlooked. In this work, we present the first comprehensive study of adapting vision-language models for adversarial robustness under multimodal attacks. First, we introduce a multimodal attack strategy and investigate the impact of different attacks. We then propose a multimodal contrastive adversarial training loss that aligns clean and adversarial text embeddings with adversarial and clean visual features, respectively, to enhance the adversarial robustness of both the image and text encoders of CLIP. Extensive experiments on 15 datasets across two tasks demonstrate that our method significantly improves the adversarial robustness of CLIP. Interestingly, we find that a model fine-tuned against multimodal adversarial attacks is more robust than its counterpart fine-tuned solely against image-based attacks, even under image-only attacks, which may open up new possibilities for enhancing the security of VLMs.
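The cross-modal alignment described above can be sketched as a symmetric InfoNCE-style contrastive loss applied to two pairings: adversarial images with clean text, and clean images with adversarial text. This is a minimal NumPy illustration of that general idea, not the paper's exact loss; the function names, the equal weighting of the two terms, and the temperature value are assumptions for the sketch.

```python
import numpy as np

def _normalize(x):
    # Project embeddings onto the unit sphere, as CLIP does before similarity.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def _log_softmax(z):
    # Numerically stable log-softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss: the i-th image should match the i-th text."""
    img_emb, txt_emb = _normalize(img_emb), _normalize(txt_emb)
    logits = img_emb @ txt_emb.T / temperature          # (B, B) similarity matrix
    idx = np.arange(logits.shape[0])
    loss_i2t = -_log_softmax(logits)[idx, idx].mean()   # image -> text direction
    loss_t2i = -_log_softmax(logits.T)[idx, idx].mean() # text -> image direction
    return (loss_i2t + loss_t2i) / 2.0

def multimodal_adv_loss(clean_img, clean_txt, adv_img, adv_txt):
    # Hypothetical combination: align adversarial images with clean text
    # embeddings and clean images with adversarial text embeddings.
    return 0.5 * (clip_contrastive_loss(adv_img, clean_txt)
                  + clip_contrastive_loss(clean_img, adv_txt))
```

In an actual fine-tuning loop, `adv_img` and `adv_txt` would come from the multimodal attack (e.g., perturbed images and adversarially modified prompts) encoded by the respective CLIP encoders, and the loss would be minimized over the encoder parameters.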
URL
https://arxiv.org/abs/2404.19287