Abstract
In unconstrained scenarios, face recognition and person re-identification are subject to distortions such as motion blur, atmospheric turbulence, or upsampling artifacts. To improve robustness in these scenarios, we propose a methodology called Distortion-Adaptive Learned Invariance for Identification (DaliID) models. We contend that distortion augmentations, which degrade image quality, can be successfully leveraged to a greater degree than has been shown in the literature. Aided by an adaptive weighting schedule, a novel distortion augmentation is applied at severe levels during training. This training strategy increases feature-level invariance to distortions and decreases domain shift to unconstrained scenarios. At inference, we use a magnitude-weighted fusion of features from parallel models to retain robustness across the range of images. DaliID models achieve state-of-the-art (SOTA) for both face recognition and person re-identification on seven benchmark datasets, including IJB-S, TinyFace, DeepChange, and MSMT17. Additionally, we provide recaptured evaluation data at a distance of 750+ meters and further validate on real long-distance face imagery.
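The inference-time "magnitude-weighted fusion of features from parallel models" can be sketched roughly as follows. This is a minimal illustration, not the paper's exact formulation: the function name, the use of the L2 norm as the "magnitude", and the two-model setup are all assumptions for demonstration.

```python
import numpy as np

def magnitude_weighted_fusion(feat_a, feat_b):
    """Fuse two feature vectors from parallel models, weighting each by
    its L2 norm (a hypothetical reading of 'magnitude-weighted fusion';
    the paper's exact weighting scheme may differ)."""
    mag_a = np.linalg.norm(feat_a)
    mag_b = np.linalg.norm(feat_b)
    # Unit-normalize each embedding, then weight by its original magnitude,
    # so the model producing the higher-magnitude feature dominates.
    fused = mag_a * (feat_a / mag_a) + mag_b * (feat_b / mag_b)
    # Return a unit-length fused embedding for cosine-similarity matching.
    return fused / np.linalg.norm(fused)

# Toy example with 2-D "embeddings":
f_standard = np.array([3.0, 0.0])   # e.g., model trained on clean images
f_distort = np.array([0.0, 1.0])    # e.g., distortion-adapted model
fused = magnitude_weighted_fusion(f_standard, f_distort)
```

Weighting by feature magnitude is a common proxy for embedding confidence in recognition pipelines, which is why it is used as the assumed interpretation here.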
URL
https://arxiv.org/abs/2302.05753