
Boosting Cross-task Transferability of Adversarial Patches with Visual Relations

2023-04-11 11:43:57
Tony Ma, Songze Li, Yisong Xiao, Shunchang Liu

Abstract

The transferability of adversarial examples is a crucial aspect of evaluating the robustness of deep learning systems, particularly in black-box scenarios. Although several methods have been proposed to enhance cross-model transferability, little attention has been paid to the transferability of adversarial examples across different tasks. This issue has become increasingly relevant with the emergence of foundational multi-task AI systems such as Visual ChatGPT, which renders adversarial samples generated for a single task of relatively limited utility. Furthermore, these systems often perform inferential functions that go beyond mere recognition tasks. To address this gap, we propose VRAP, a novel Visual Relation-based cross-task Adversarial Patch generation method that aims to evaluate the robustness of various visual tasks, especially those involving visual reasoning, such as Visual Question Answering and Image Captioning. VRAP employs scene graphs to combine object recognition-based deception with predicate-based relation elimination, thereby disrupting the visual reasoning information shared among inferential tasks. Our extensive experiments demonstrate that VRAP significantly surpasses previous methods in black-box transferability across diverse visual reasoning tasks.
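To make the two-part objective concrete, here is a minimal PyTorch sketch of optimizing a patch against a surrogate scene-graph model. This is not the authors' code: the `sg_model` interface (assumed to return object and predicate logits), the `apply_patch` helper, the equal loss weighting, and all hyperparameters are illustrative assumptions.

```python
# Sketch of a VRAP-style two-term patch objective: recognition deception
# plus relation (predicate) elimination on a surrogate scene-graph model.
# All interfaces below are hypothetical placeholders, not the paper's code.
import torch
import torch.nn.functional as F

def apply_patch(image, patch, x, y):
    """Paste a square patch onto the image at pixel offset (x, y)."""
    patched = image.clone()
    ph, pw = patch.shape[-2:]
    patched[..., y:y + ph, x:x + pw] = patch
    return patched

def vrap_style_loss(object_logits, predicate_logits, object_labels):
    """Combine recognition-based deception with relation elimination.

    object_logits:    (N_obj, C_obj) object scores from the surrogate
    predicate_logits: (N_rel, C_pred) predicate scores from the surrogate
    object_labels:    (N_obj,) ground-truth object classes
    """
    # (1) Deception: minimizing the negated cross-entropy maximizes it,
    # pushing object predictions away from their true classes.
    deception = -F.cross_entropy(object_logits, object_labels)
    # (2) Relation elimination: suppress the confidence of every predicted
    # predicate, degrading the relations that reasoning tasks rely on.
    relation = predicate_logits.softmax(dim=-1).max(dim=-1).values.mean()
    return deception + relation

def optimize_patch(sg_model, image, object_labels, steps=200, lr=0.01):
    """Optimize a 64x64 patch against a surrogate scene-graph model
    `sg_model`, assumed to return (object_logits, predicate_logits)."""
    patch = torch.rand(3, 64, 64, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        patched = apply_patch(image, patch.clamp(0, 1), x=32, y=32)
        obj_logits, pred_logits = sg_model(patched.unsqueeze(0))
        loss = vrap_style_loss(obj_logits, pred_logits, object_labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```

The design choice this illustrates is that the patch is trained against both heads of the scene-graph surrogate at once, so the perturbation damages the shared relational signal rather than any single task's output.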

Abstract (translated)

The transferability of adversarial examples is an important aspect of evaluating the robustness of deep learning systems, especially in black-box scenarios. Although many methods have been proposed to enhance cross-model transferability, little attention has been paid to the transferability of adversarial examples across different tasks. This issue has become increasingly important with the emergence of foundation AI systems for visual tasks (such as Visual ChatGPT), which makes adversarial examples generated for a single task of relatively limited utility. Moreover, these systems typically include inferential functions that go beyond recognition-like tasks. To address this gap, we propose a novel visual relation-based cross-task adversarial patch generation method called VRAP, which aims to evaluate the robustness of various visual tasks, especially those involving visual reasoning (such as Visual Question Answering and Image Captioning). VRAP uses scene graphs to combine object recognition-based deception with predicate-based relation elimination, thereby disrupting the visual reasoning information shared among inferential tasks. Our extensive experiments show that VRAP significantly surpasses previous methods in black-box transferability across diverse visual reasoning tasks.

URL

https://arxiv.org/abs/2304.05402

PDF

https://arxiv.org/pdf/2304.05402.pdf

