Abstract
Person re-identification (re-id) remains challenging due to significant intra-class variations across different cameras. Recently, there has been growing interest in using generative models to augment training data and enhance invariance to input changes. The generative pipelines in existing methods, however, stay relatively separate from the discriminative re-id learning stages. Accordingly, re-id models are often trained on the generated data in a straightforward manner. In this paper, we seek to improve learned re-id embeddings by better leveraging the generated data. To this end, we propose a joint learning framework that couples re-id learning and data generation end-to-end. Our model involves a generative module that separately encodes each person into an appearance code and a structure code, and a discriminative module that shares the appearance encoder with the generative module. By switching the appearance or structure codes, the generative module is able to generate high-quality cross-id composed images, which are fed back online to the appearance encoder and used to improve the discriminative module. The proposed joint learning framework yields significant improvement over the baseline that does not use generated data, leading to state-of-the-art performance on several benchmark datasets.
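The feedback loop described above can be sketched in a few lines of PyTorch. This is a minimal illustration of the idea, not the paper's actual implementation: the module architectures, code dimensions, loss weight, and names such as `AppearanceEncoder`, `StructureEncoder`, and `Decoder` are assumptions made for the example. What it shows is the shared appearance encoder: a cross-id image is composed from one image's appearance code and another's structure code, then passed back through the same encoder so its identity loss also shapes the re-id embedding.

```python
# Minimal sketch of the joint generative/discriminative loop described in
# the abstract. All layer sizes, module names, and the 0.5 loss weight are
# illustrative assumptions, not the paper's DG-Net implementation.
import torch
import torch.nn as nn

class AppearanceEncoder(nn.Module):
    """Shared by the generative and discriminative modules."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, dim, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
    def forward(self, x):
        return self.net(x).flatten(1)  # appearance code: (B, dim)

class StructureEncoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, dim, 4, 2, 1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)  # structure code: spatial map (B, dim, H/4, W/4)

class Decoder(nn.Module):
    """Composes an appearance code with a structure code into an image."""
    def __init__(self, app_dim=128, str_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(str_dim + app_dim, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
        )
    def forward(self, app, struct):
        # Broadcast the appearance vector over the structure map's grid.
        app_map = app[:, :, None, None].expand(-1, -1, *struct.shape[2:])
        return self.net(torch.cat([struct, app_map], dim=1))

# Shared appearance encoder; a linear head turns its code into id logits.
E_app, E_str, G = AppearanceEncoder(), StructureEncoder(), Decoder()
num_ids = 751                       # e.g. Market-1501 training identities
classifier = nn.Linear(128, num_ids)

x_a, x_b = torch.randn(4, 3, 64, 32), torch.randn(4, 3, 64, 32)
y_a = torch.randint(num_ids, (4,))  # identity labels of x_a

# Cross-id composition: appearance of x_a rendered in the structure of x_b.
x_ab = G(E_app(x_a), E_str(x_b))

# The generated image is fed back online through the *same* appearance
# encoder, so its id supervision also trains the re-id embedding.
ce = nn.CrossEntropyLoss()
loss_real = ce(classifier(E_app(x_a)), y_a)
loss_gen = ce(classifier(E_app(x_ab)), y_a)  # composed image keeps x_a's id
loss = loss_real + 0.5 * loss_gen            # 0.5 is an assumed weight
loss.backward()
```

A full system would add the adversarial and reconstruction losses that make the generated images realistic; the sketch keeps only the feedback path from generator to appearance encoder, which is the part that couples generation with re-id learning.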
URL
https://arxiv.org/abs/1904.07223