Paper Reading AI Learner

Imperceptible Physical Attack against Face Recognition Systems via LED Illumination Modulation

2023-07-25 07:20:21
Junbin Fang, Canjian Jiang, You Jiang, Puxi Lin, Zhaojie Chen, Yujing Sun, Siu-Ming Yiu, Zoe L. Jiang

Abstract

Although face recognition is beginning to play an important role in our daily lives, we must be aware that data-driven face recognition vision systems are vulnerable to adversarial attacks. However, both current categories of adversarial attack, digital attacks and physical attacks, have drawbacks: the former are impractical in the real world, while the latter are conspicuous, computationally expensive, and difficult to execute. To address these issues, we propose a practical, executable, inconspicuous, and computationally lightweight adversarial attack based on LED illumination modulation. To fool the systems, the proposed attack generates luminance changes imperceptible to the human eye through fast intensity modulation of the scene's LED illumination, and exploits the rolling shutter effect of the CMOS image sensors in face recognition systems to implant luminance perturbations into the captured face images. In summary, we present a denial-of-service (DoS) attack for face detection and a dodging attack for face verification. We also evaluate their effectiveness against the well-known face detection models Dlib, MTCNN, and RetinaFace, and the face verification models Dlib, FaceNet, and ArcFace. Extensive experiments show that the success rates of the DoS attacks against the face detection models reach 97.67%, 100%, and 100%, respectively, and that the success rates of the dodging attacks against all face verification models reach 100%.
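The core mechanism can be illustrated in code: because a rolling-shutter CMOS sensor exposes image rows sequentially, an LED flickering faster than human flicker fusion imprints alternating bright and dark horizontal bands on the captured frame. The sketch below is a minimal simulation of that stripe effect, not the authors' implementation; the LED frequency, per-row readout time, and modulation depth are hypothetical parameters chosen for illustration.

```python
import numpy as np

def simulate_rolling_shutter_stripes(image, led_freq_hz, row_readout_s, depth=0.2):
    """Simulate the luminance stripes an intensity-modulated LED leaves on a
    rolling-shutter capture.

    Each sensor row is read out at a slightly later time, so a square-wave
    LED (on/off at led_freq_hz) multiplies different rows by different gains,
    producing horizontal bands invisible to the eye but present in the image.
    All parameters here are illustrative, not taken from the paper.
    """
    rows = image.shape[0]
    t = np.arange(rows) * row_readout_s            # readout time of each row
    # Square-wave LED: on during the first half of each period, off otherwise.
    led_on = (np.floor(2.0 * led_freq_hz * t) % 2 == 0)
    gain = np.where(led_on, 1.0, 1.0 - depth)      # "off" rows lose `depth` luminance
    striped = image.astype(np.float64) * gain[:, None, None]
    return np.clip(striped, 0, 255).astype(np.uint8)
```

With a 100 µs row readout and a 2.5 kHz LED, the half-period spans two rows, so bright and dark bands alternate every two rows; the flicker itself is far above the human flicker-fusion threshold and thus imperceptible to observers in the scene.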

URL

https://arxiv.org/abs/2307.13294

PDF

https://arxiv.org/pdf/2307.13294.pdf


Tags
3D Action Action_Localization Action_Recognition Activity Adversarial Agent Attention Autonomous Bert Boundary_Detection Caption Chat Classification CNN Compressive_Sensing Contour Contrastive_Learning Deep_Learning Denoising Detection Dialog Diffusion Drone Dynamic_Memory_Network Edge_Detection Embedding Embodied Emotion Enhancement Face Face_Detection Face_Recognition Facial_Landmark Few-Shot Gait_Recognition GAN Gaze_Estimation Gesture Gradient_Descent Handwriting Human_Parsing Image_Caption Image_Classification Image_Compression Image_Enhancement Image_Generation Image_Matting Image_Retrieval Inference Inpainting Intelligent_Chip Knowledge Knowledge_Graph Language_Model LLM Matching Medical Memory_Networks Multi_Modal Multi_Task NAS NMT Object_Detection Object_Tracking OCR Ontology Optical_Character Optical_Flow Optimization Person_Re-identification Point_Cloud Portrait_Generation Pose Pose_Estimation Prediction QA Quantitative Quantitative_Finance Quantization Re-identification Recognition Recommendation Reconstruction Regularization Reinforcement_Learning Relation Relation_Extraction Represenation Represenation_Learning Restoration Review RNN Robot Salient Scene_Classification Scene_Generation Scene_Parsing Scene_Text Segmentation Self-Supervised Semantic_Instance_Segmentation Semantic_Segmentation Semi_Global Semi_Supervised Sence_graph Sentiment Sentiment_Classification Sketch SLAM Sparse Speech Speech_Recognition Style_Transfer Summarization Super_Resolution Surveillance Survey Text_Classification Text_Generation Tracking Transfer_Learning Transformer Unsupervised Video_Caption Video_Classification Video_Indexing Video_Prediction Video_Retrieval Visual_Relation VQA Weakly_Supervised Zero-Shot