Abstract
Although face recognition is beginning to play an important role in our daily lives, we must be aware that data-driven face recognition vision systems are vulnerable to adversarial attacks. However, the two current categories of adversarial attacks, digital attacks and physical attacks, both have drawbacks: the former are impractical, while the latter are conspicuous, computationally expensive, and difficult to execute. To address these issues, we propose a practical, executable, inconspicuous, and computationally light adversarial attack based on LED illumination modulation. To fool the target systems, the proposed attack generates luminance changes imperceptible to the human eye through fast intensity modulation of scene LED illumination, and exploits the rolling shutter effect of the CMOS image sensors in face recognition systems to implant luminance perturbations into the captured face images. In summary, we present a denial-of-service (DoS) attack against face detection and a dodging attack against face verification. We evaluate their effectiveness against the well-known face detection models Dlib, MTCNN, and RetinaFace, and the face verification models Dlib, FaceNet, and ArcFace. Extensive experiments show that the success rates of the DoS attack against the face detection models reach 97.67%, 100%, and 100%, respectively, and the success rate of the dodging attack against all face verification models reaches 100%.
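The core mechanism can be illustrated with a minimal simulation: a rolling-shutter CMOS sensor exposes image rows sequentially, so an LED flickering faster than human perception is sampled at a slightly different time by each row, turning a temporal intensity waveform into spatial luminance stripes on the captured frame. The sketch below is a hypothetical illustration (not the paper's code); the waveform parameters, row readout time, and function names are all assumptions chosen for clarity.

```python
# Hypothetical sketch: how fast LED intensity modulation becomes horizontal
# luminance stripes under a rolling shutter. Not the paper's implementation;
# frequencies, duty cycle, and readout timing are illustrative assumptions.
import numpy as np

def led_waveform(t, freq_hz=2000.0, duty=0.5, low=0.6, high=1.0):
    """Square-wave LED intensity at time t (seconds). Flicker at ~2 kHz is
    imperceptible to the human eye but not to a row-sequential sensor."""
    phase = (t * freq_hz) % 1.0
    return np.where(phase < duty, high, low)

def rolling_shutter_capture(scene, row_readout_s=30e-6, freq_hz=2000.0):
    """Each row integrates light slightly later than the previous one, so the
    temporal LED waveform is sampled spatially down the frame as stripes."""
    rows = scene.shape[0]
    t = np.arange(rows) * row_readout_s              # exposure time per row
    gain = led_waveform(t, freq_hz=freq_hz)          # one illumination gain per row
    return np.clip(scene * gain[:, None], 0.0, 1.0)  # striped perturbation

scene = np.full((480, 640), 0.8)   # uniform gray stand-in for a face region
striped = rolling_shutter_capture(scene)
# rows now alternate between bright (0.8) and dim (0.48) bands
```

With a 2 kHz modulation and 30 µs per-row readout, one flicker period spans roughly 17 rows, producing bands a few pixels wide; such structured luminance perturbations are what the attack implants into the face images seen by the detection and verification models.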
URL
https://arxiv.org/abs/2307.13294