Revisiting the Adversarial Robustness of Vision Language Models: a Multimodal Perspective

2024-04-30 06:34:21
Wanqi Zhou, Shuanghao Bai, Qibin Zhao, Badong Chen

Abstract

Pretrained vision-language models (VLMs) such as CLIP have shown impressive generalization across a wide range of downstream tasks, yet they remain vulnerable to adversarial attacks. Prior research has concentrated mainly on hardening the image encoder against image-based attacks, while text-based and multimodal attacks have remained largely unexplored. In this work, we present the first known comprehensive study of adapting vision-language models for adversarial robustness under multimodal attacks. First, we introduce a multimodal attack strategy and investigate the impact of the different attacks. We then propose a multimodal contrastive adversarial training loss that aligns clean and adversarial text embeddings with adversarial and clean visual features, respectively, to enhance the adversarial robustness of both the image and text encoders of CLIP. Extensive experiments on 15 datasets across two tasks demonstrate that our method significantly improves the adversarial robustness of CLIP. Interestingly, we find that a model fine-tuned against multimodal adversarial attacks exhibits greater robustness than its counterpart fine-tuned solely against image-based attacks, even when evaluated under image-only attacks, which may open up new possibilities for enhancing the security of VLMs.
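The abstract describes a loss that cross-aligns modalities: adversarial image features are pulled toward clean text embeddings, and adversarial text embeddings toward clean image features. Below is a minimal PyTorch sketch of what such an objective could look like. It is an illustrative assumption, not the authors' released code: the PGD image attack, the precomputed adversarial token ids (standing in for a discrete text attack), and all hyperparameters (eps, alpha, steps, temperature) are hypothetical.

```python
# Hypothetical sketch of a multimodal contrastive adversarial training loss.
# Not the paper's implementation; the attack and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE between two batches of paired embeddings."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature                    # (B, B) similarities
    labels = torch.arange(a.size(0), device=a.device)   # positives on diagonal
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

def pgd_attack(image_encoder, images, text_emb, eps=4/255, alpha=1/255, steps=3):
    """L_inf PGD on images that maximizes the contrastive loss against the
    paired text embeddings (an assumed attack objective)."""
    text_emb = text_emb.detach()
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = info_nce(image_encoder(adv), text_emb)
        grad, = torch.autograd.grad(loss, adv)
        adv = adv.detach() + alpha * grad.sign()        # ascend the loss
        adv = torch.max(torch.min(adv, images + eps), images - eps).clamp(0, 1)
    return adv.detach()

def multimodal_adv_loss(image_encoder, text_encoder, images, token_ids, adv_token_ids):
    """Cross-modal alignment per the abstract: adversarial images <-> clean
    text, and adversarial text <-> clean images. adv_token_ids would come
    from a discrete text attack (e.g., word substitution), stubbed here."""
    txt_clean = text_encoder(token_ids)
    txt_adv = text_encoder(adv_token_ids)
    img_clean = image_encoder(images)
    img_adv = image_encoder(pgd_attack(image_encoder, images, txt_clean))
    return info_nce(img_adv, txt_clean) + info_nce(img_clean, txt_adv)
```

The symmetric InfoNCE mirrors CLIP's own contrastive pretraining objective, which is why cross-modal alignment between clean and adversarial features is a natural fit for fine-tuning both encoders.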

URL

https://arxiv.org/abs/2404.19287

PDF

https://arxiv.org/pdf/2404.19287.pdf

