Abstract
Generic Face Image Quality Assessment (GFIQA) evaluates the perceptual quality of facial images, which is crucial for improving image restoration algorithms and for selecting high-quality face images for downstream tasks. We present a novel transformer-based method for GFIQA, which is aided by two unique mechanisms. First, a Dual-Set Degradation Representation Learning (DSL) mechanism uses facial images with both synthetic and real degradations to decouple degradation from content, ensuring generalizability to real-world scenarios. This self-supervised method learns degradation features on a global scale, providing a robust alternative to conventional methods that use local patch information in degradation learning. Second, our transformer leverages facial landmarks to emphasize visually salient parts of a face image when evaluating its perceptual quality. We also introduce a balanced and diverse Comprehensive Generic Face IQA (CGFIQA-40k) dataset of 40K images, carefully designed to overcome biases in existing datasets, in particular imbalances in skin tone and gender representation. Extensive analysis and evaluation demonstrate the robustness of our method, marking a significant improvement over prior methods.
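The abstract describes DSL as a self-supervised scheme that pulls together representations of images sharing the same degradation while being invariant to face content. The paper's exact loss is not given here; as a minimal illustration, the sketch below implements a generic InfoNCE-style contrastive objective (a common choice for such representation learning, assumed here, not confirmed by the abstract), where embeddings of two differently-contented images with the same degradation form a positive pair:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """Contrastive loss over degradation embeddings.

    anchors[i] and positives[i] are embeddings of two face images with
    DIFFERENT content but the SAME degradation; all other pairs in the
    batch act as negatives. Minimizing this loss encourages embeddings
    to encode degradation type rather than face identity.
    """
    # L2-normalize so similarity is cosine similarity.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature          # (N, N) pairwise similarities
    # Row-wise log-softmax; the matching degradation pair sits on the diagonal.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(a))
    return -log_probs[idx, idx].mean()
```

When same-degradation pairs are already aligned in embedding space, the loss is near zero; mismatched pairs drive it up, which is the signal that trains the degradation encoder.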
URL
https://arxiv.org/abs/2406.09622