Abstract
Face anti-spoofing (FAS) is indispensable for face recognition systems. Many texture-driven countermeasures have been developed against presentation attacks (PAs), but their performance on unseen domains or unseen spoofing types remains unsatisfactory. Instead of exhaustively collecting all spoofing variations and making binary live/spoof decisions, we offer a new perspective on the FAS task: distinguishing between the normal and abnormal movements of live and spoof presentations. We propose the Geometry-Aware Interaction Network (GAIN), which exploits dense facial landmarks with a spatio-temporal graph convolutional network (ST-GCN) to establish a more interpretable and modularized FAS model. Additionally, with our cross-attention feature interaction mechanism, GAIN can be easily integrated with existing methods to significantly boost their performance. Our approach achieves state-of-the-art performance in standard intra- and cross-dataset evaluations. Moreover, our model outperforms state-of-the-art methods by a large margin under the cross-dataset cross-type protocol on CASIA-SURF 3DMask (+10.26% AUC), exhibiting strong robustness against domain shifts and unseen spoofing types.
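The abstract does not spell out the cross-attention feature interaction mechanism; as a rough illustration only, attention from one branch's features (e.g., geometric/landmark features as queries) onto another branch's features (e.g., texture features as keys/values) can be sketched as below. All names (`cross_attention`, `softmax`, the branch roles, and the dimensions) are illustrative assumptions, not the paper's actual implementation, which uses learned projections inside GAIN.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, d_k):
    # queries:     (N_q, d)  e.g. geometric-branch tokens (assumed role)
    # keys_values: (N_kv, d) e.g. texture-branch tokens (assumed role)
    # scaled dot-product attention: each query aggregates the other branch's features
    scores = queries @ keys_values.T / np.sqrt(d_k)   # (N_q, N_kv)
    weights = softmax(scores, axis=-1)                # rows sum to 1
    return weights @ keys_values                      # (N_q, d)

# toy example: 2 geometric tokens attend over 3 texture tokens, d = 4
rng = np.random.default_rng(0)
geo = rng.standard_normal((2, 4))
tex = rng.standard_normal((3, 4))
fused = cross_attention(geo, tex, d_k=4)
```

In a real model the queries, keys, and values would each pass through learned linear projections before the dot product, and the fused features would typically be added back to the query branch via a residual connection.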
URL
https://arxiv.org/abs/2306.14313