Paper Reading AI Learner

Geometry-Aware Generation of Adversarial Point Clouds

2020-05-14 09:07:45
Yuxin Wen, Jiehong Lin, Ke Chen, C. L. Philip Chen, Kui Jia

Abstract

Machine learning models are known to be vulnerable to adversarial examples. While most existing methods for adversarial attack and defense work on 2D image domains, a few recent ones attempt to extend these studies to 3D point cloud data. However, the adversarial results obtained by these methods typically contain point outliers, which are both noticeable and easy to defend against with simple outlier-removal techniques. Motivated by the different mechanisms by which humans perceive 2D images and 3D shapes, we propose in this paper a new design of geometry-aware objectives, whose solutions favor (discrete versions of) the desired surface properties of smoothness and fairness. To generate adversarial point clouds, we use a targeted-attack misclassification loss that supports the continued pursuit of more malicious signals. Regularizing this targeted attack loss with our proposed geometry-aware objectives gives our method of Geometry-Aware Adversarial Attack ($GeoA^3$). Results of $GeoA^3$ tend to be more adversarial, arguably harder to defend against, and retain the key adversarial property of being imperceptible to humans. While the main focus of this paper is learning to generate adversarial point clouds, we also present a simple but effective algorithm termed Iterative Tangent Jittering (IterTanJit), which preserves surface-level adversarial effects when re-sampling point clouds from surface meshes reconstructed from adversarial point clouds. We quantitatively evaluate our methods on both synthetic and physical object models in terms of attack success rate and geometric regularity. For qualitative evaluation, we conduct subjective studies by collecting human preferences via Amazon Mechanical Turk. Comparative results across comprehensive experiments confirm the advantages of our proposed methods over existing ones. We make our source code publicly available.
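The abstract describes optimizing a targeted misclassification loss regularized by geometry-aware terms. The following NumPy sketch illustrates the general shape of such an objective — a CW-style targeted margin loss plus a Chamfer-distance term and a kNN-based smoothness penalty. The function names, the choice of regularizers, and the weights `lam_chamfer`/`lam_smooth` are illustrative assumptions, not the paper's actual $GeoA^3$ formulation:

```python
import numpy as np

def targeted_attack_loss(logits, target, kappa=0.0):
    # CW-style margin loss: push the target-class logit above all others.
    other = np.max(np.delete(logits, target))
    return max(other - logits[target] + kappa, 0.0)

def chamfer_distance(P, Q):
    # Symmetric Chamfer distance between point sets P, Q (each N x 3):
    # keeps the adversarial cloud close to the original.
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def knn_smoothness(P, k=4):
    # Penalize each point's deviation from the centroid of its k nearest
    # neighbours -- a crude discrete stand-in for surface smoothness,
    # which also discourages isolated point outliers.
    d = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, 1:k + 1]   # skip self at index 0
    centroids = P[idx].mean(axis=1)
    return np.linalg.norm(P - centroids, axis=1).mean()

def geoa3_style_objective(P_adv, P_orig, logits, target,
                          lam_chamfer=1.0, lam_smooth=1.0):
    # Hypothetical combined objective: attack loss + geometry regularizers.
    return (targeted_attack_loss(logits, target)
            + lam_chamfer * chamfer_distance(P_adv, P_orig)
            + lam_smooth * knn_smoothness(P_adv))
```

In practice one would minimize such an objective over the perturbed point coordinates with a gradient-based optimizer, with `logits` produced by the attacked point cloud classifier; the geometry terms steer the solution toward smooth, fair surfaces rather than scattered outliers.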

URL

https://arxiv.org/abs/1912.11171

PDF

https://arxiv.org/pdf/1912.11171.pdf
