Abstract
Unsupervised domain adaptive (UDA) person re-identification (re-ID) aims to learn identity information from labeled images in source domains and apply it to unlabeled images in a target domain. A major weakness of many unsupervised re-ID methods is that they perform poorly under large domain variations such as illumination, viewpoint, and occlusion. In this paper, we propose a Synthesis Model Bank (SMB) to handle illumination variation in unsupervised person re-ID. The proposed SMB consists of several convolutional neural networks (CNNs) for feature extraction and Mahalanobis matrices for distance metrics. They are trained on synthetic data with different illumination conditions, and their synergistic effect makes the SMB robust against illumination variation. To better quantify illumination intensity and improve the quality of the synthetic images, we introduce a new 3D virtual-human dataset for GAN-based image synthesis. In our experiments, the proposed SMB outperforms other synthesis methods on several re-ID benchmarks.
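To make the SMB idea concrete, here is a minimal sketch of a model bank that pairs one CNN feature extractor with one Mahalanobis matrix per illumination condition and routes a query to the branch whose condition matches best. Everything here is an illustrative assumption, not the paper's implementation: the class name, the toy backbone, the identity-initialized metrics, and the mean-luminance proxy for illumination intensity are all hypothetical stand-ins.

```python
# Hypothetical sketch of a Synthesis Model Bank (SMB): one (CNN, Mahalanobis
# matrix) pair per illumination level seen in the synthetic training data.
# The illumination estimate (mean luminance) is an assumed proxy only.
import torch
import torch.nn as nn

class SMB:
    def __init__(self, illum_levels, feat_dim=128):
        self.illum_levels = torch.tensor(illum_levels)   # e.g. [0.2, 0.5, 0.8]
        self.cnns = [self._make_cnn(feat_dim) for _ in illum_levels]
        # Learned positive semi-definite metrics in practice; identity here.
        self.metrics = [torch.eye(feat_dim) for _ in illum_levels]

    @staticmethod
    def _make_cnn(feat_dim):
        # Stand-in backbone; a real re-ID system would use a stronger CNN.
        return nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat_dim),
        )

    def _select(self, img):
        # Assumed illumination estimate: mean luminance, snapped to the
        # nearest synthetic illumination level in the bank.
        return int(torch.argmin((self.illum_levels - img.mean()).abs()))

    def distance(self, query, gallery):
        # Route both images through the branch matching the query's
        # illumination, then compare with that branch's Mahalanobis metric.
        k = self._select(query)
        f_q = self.cnns[k](query.unsqueeze(0))
        f_g = self.cnns[k](gallery.unsqueeze(0))
        d = f_q - f_g
        return (d @ self.metrics[k] @ d.T).squeeze()

# Usage on random stand-in images (3 x 128 x 64, values in [0, 1]):
smb = SMB(illum_levels=[0.2, 0.5, 0.8])
q, g = torch.rand(3, 128, 64), torch.rand(3, 128, 64)
print(smb.distance(q, g))
```

The design choice the sketch highlights is that robustness comes from the bank as a whole rather than any single model: each branch only needs to handle the illumination condition it was trained on, and the selection step decides which branch applies.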
URL
https://arxiv.org/abs/2301.09702