Abstract
Face Image Quality Assessment (FIQA) techniques have seen steady improvements over recent years, but their performance still deteriorates if the input face samples are not properly aligned. This alignment sensitivity comes from the fact that most FIQA techniques are trained or designed using a specific face alignment procedure. If the alignment technique changes, the performance of most existing FIQA techniques quickly becomes suboptimal. To address this problem, we present in this paper a novel knowledge distillation approach, termed AI-KD, that can be combined with any existing FIQA technique, improving its robustness to alignment variations and, in turn, its performance with different alignment procedures. To validate the proposed distillation approach, we conduct comprehensive experiments on 6 face datasets with 4 recent face recognition models, comparing against 7 state-of-the-art FIQA techniques. Our results show that AI-KD consistently improves the performance of the initial FIQA techniques, not only with misaligned samples but also with properly aligned facial images. Furthermore, it leads to a new state-of-the-art when used with a competitive initial FIQA approach. The code for AI-KD is made publicly available at: this https URL.
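The core idea of distilling alignment robustness can be illustrated with a minimal toy sketch. Note that this is not the authors' implementation: the teacher, the perturbation model, and the linear student below are all simplified stand-ins chosen for illustration. The sketch only shows the general recipe the abstract describes: a student is trained on alignment-perturbed inputs to reproduce the quality scores a teacher assigns to the properly aligned counterparts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical teacher: in AI-KD this would be a trained FIQA model scoring
# properly aligned crops; here it is a toy scalar function of the features.
def teacher_quality(aligned):
    return np.tanh(aligned.mean(axis=1))

# Simulate a change of alignment procedure as a feature perturbation
# (a crude stand-in for cropping/rotation differences).
def perturb_alignment(aligned, strength=0.3):
    return aligned + strength * rng.standard_normal(aligned.shape)

# Student: a linear model q = x @ w + b, fitted with gradient descent on the
# *misaligned* inputs so that it matches the teacher's scores for the
# corresponding *aligned* inputs -- the distillation target.
d, n = 8, 512
X_aligned = rng.standard_normal((n, d))
y_teacher = teacher_quality(X_aligned)
X_mis = perturb_alignment(X_aligned)

w, b, lr = np.zeros(d), 0.0, 0.05
for _ in range(2000):
    err = X_mis @ w + b - y_teacher        # regression-style distillation loss
    w -= lr * (X_mis.T @ err) / n
    b -= lr * err.mean()

mse = float(np.mean((X_mis @ w + b - y_teacher) ** 2))
print(f"distillation MSE on misaligned inputs: {mse:.4f}")
```

After training, the student's error on misaligned inputs drops well below the variance of the teacher scores, i.e. the student has learned to predict aligned-image quality from misaligned samples, which is the robustness property the distillation aims for.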
URL
https://arxiv.org/abs/2404.09555