Abstract
Face recognition has achieved remarkable progress in recent years owing to the rapid advancement of deep convolutional neural networks (CNNs). However, deep CNNs are vulnerable to adversarial examples, which can lead to severe consequences in security-sensitive real-world face recognition applications. Adversarial attacks are widely studied because they can expose a model's vulnerabilities before it is deployed. In this paper, we evaluate the robustness of state-of-the-art face recognition models in the decision-based black-box attack setting, where the attackers have no access to the model parameters or gradients and can only obtain hard-label predictions by sending queries to the target model. This attack setting is more practical for real-world face recognition systems. To improve the query efficiency of previous methods, we propose an evolutionary attack algorithm that models the local geometry of the search directions and reduces the dimensionality of the search space. Extensive experiments demonstrate the effectiveness of the proposed method, which finds a minimal perturbation for an input face image with far fewer queries. We also successfully apply the proposed method to attack a real-world face recognition system.
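To make the decision-based setting concrete, the following is a minimal sketch of an evolutionary hard-label attack, not the authors' implementation: the paper models the local geometry with an adapted covariance, whereas this sketch samples from a fixed isotropic Gaussian in a reduced m-dimensional space and upsamples it, keeping only the dimensionality-reduction and accept-if-closer ideas. The function name, parameters, starting-point convention, and contraction factor are all illustrative assumptions.

```python
import numpy as np

def evolutionary_attack(model, x, x_init, y_true, steps=200, m=8, sigma=0.05, rng=None):
    """Simplified decision-based black-box attack (illustrative sketch).

    model : callable returning only a hard label (decision-based setting).
    x     : original input, a 1-D float array with values in [0, 1].
    x_init: a starting point already classified differently from y_true.
    Mutations are sampled in a reduced m-dimensional space and upsampled
    to the input size, shrinking the dimensionality of the search space.
    """
    rng = np.random.default_rng(rng)
    assert model(x_init) != y_true, "starting point must already be adversarial"
    delta = x_init - x
    n = x.shape[0]
    reps = -(-n // m)  # ceil(n / m) for the nearest-neighbour upsample
    for _ in range(steps):
        z = rng.normal(0.0, sigma, size=m)   # mutation in the reduced space
        step = np.repeat(z, reps)[:n]        # upsample to the input size
        cand = (delta + step) * 0.99         # contract toward the input
        # accept only if the candidate stays adversarial AND is closer to x;
        # each iteration costs exactly one query to the target model
        if (model(np.clip(x + cand, 0.0, 1.0)) != y_true
                and np.linalg.norm(cand) < np.linalg.norm(delta)):
            delta = cand
    return np.clip(x + delta, 0.0, 1.0)
```

Each iteration spends one query; the actual algorithm additionally adapts the sampling covariance from successful trials, which is how it captures the local geometry of the decision boundary.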
URL
https://arxiv.org/abs/1904.04433