Abstract
The extensive use of biometric authentication systems has enabled attackers/impostors to forge user identities using morphed images. In this attack, a synthetic image is produced by merging a genuine image with another; the resultant image is then used for authentication. Numerous deep convolutional neural architectures have been proposed in the literature for face Morphing Attack Detection (MAD) to prevent such attacks and mitigate the associated risks. Although deep learning models achieve strong performance, they are difficult to understand and analyse because they are black-box/opaque in nature; as a consequence, incorrect judgments may be made. However, there is a dearth of literature explaining the decision-making of black-box deep learning models for biometric Presentation Attack Detection (PAD) or MAD, which could help the biometric community trust deep-learning-based biometric systems for identification and authentication in security applications such as border control and criminal database establishment. In this work, we present a novel visual explanation approach named Ensemble XAI, which integrates saliency maps, Class Activation Maps (CAM), and Gradient-weighted Class Activation Maps (Grad-CAM) to provide a more comprehensive visual explanation for the deep learning predictive model (EfficientNet-B1) we employed to predict whether the input presented to a biometric authentication system is morphed or genuine. The experiments were performed on three publicly available datasets, namely the Face Research Lab London Set, Wide Multi-Channel Presentation Attack (WMCA), and Makeup Induced Face Spoofing (MIFS). The experimental evaluation confirms that the resulting visual explanations highlight finer-grained details of the image features/areas EfficientNet-B1 focuses on to reach its decisions, along with appropriate reasoning.
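The core idea of combining saliency maps, CAM, and Grad-CAM into one explanation can be sketched as a simple heatmap fusion. The following is a minimal illustrative sketch, not the authors' implementation: it assumes each method has already produced a per-pixel attribution map of the same shape, and the function names (`normalize`, `ensemble_explanation`) are hypothetical.

```python
import numpy as np

def normalize(m):
    """Scale a heatmap to [0, 1]; a constant map becomes all zeros."""
    m = m.astype(np.float64)
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def ensemble_explanation(saliency, cam, grad_cam):
    """Fuse three attribution maps of identical shape by normalizing
    each to [0, 1] and averaging, so no single method dominates."""
    maps = [normalize(m) for m in (saliency, cam, grad_cam)]
    return np.mean(maps, axis=0)
```

In practice each input map would come from a separate explainer run over the same EfficientNet-B1 prediction (e.g. upsampled Grad-CAM activations), and the fused map would be overlaid on the face image for inspection.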
URL
https://arxiv.org/abs/2304.14509