Abstract
In this paper, we propose Hard Person Identity Mining (HPIM), which refines hard example mining to improve exploration efficacy in person re-identification. It is motivated by the following observation: the more attributes two people share, the harder it is to separate their identities. Based on this observation, we develop HPIM via a transferred attribute describer, a deep multi-attribute classifier trained on noisy source person-attribute datasets. We encode each image in the target person re-ID dataset into an attribute probabilistic description. In this attribute code space, we treat each person as a distribution that generates his or her view-specific attribute codes under different practical scenarios. We then estimate person-specific statistical moments from zeroth to higher orders, which are further used to calculate the central moment discrepancies between persons. This discrepancy provides a basis for choosing hard identities to organize proper mini-batches, without concern for how person representations change during metric learning. It serves as a complementary tool to hard example mining, helping to explore global rather than merely local hard example constraints within mini-batches built from randomly sampled identities. Extensive experiments on two person re-identification benchmarks validate the effectiveness of the proposed algorithm.
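The pipeline sketched above — per-person attribute codes, moment estimation, and discrepancy-based hard-identity selection — can be illustrated as follows. This is a minimal sketch under stated assumptions, not the paper's implementation: each person's attribute codes are assumed to be a NumPy array of per-image probability vectors, the moments used are the mean plus higher-order central moments, and all function names are hypothetical.

```python
import numpy as np

def central_moments(codes, max_order=3):
    """Estimate per-person moment statistics from attribute codes.

    codes: (n_images, d) array of attribute probability vectors
    for one person. Returns [mean, 2nd central moment, ..., K-th].
    """
    mean = codes.mean(axis=0)
    centered = codes - mean
    moments = [mean]
    for k in range(2, max_order + 1):
        moments.append((centered ** k).mean(axis=0))
    return moments

def moment_discrepancy(moments_a, moments_b):
    """Sum of Euclidean distances between corresponding moments
    of two persons (a simple central-moment-discrepancy measure)."""
    return sum(np.linalg.norm(a - b) for a, b in zip(moments_a, moments_b))

def hard_identity_batch(person_codes, anchor_id, batch_size):
    """Pick the identities statistically closest to the anchor
    (small discrepancy = hard to separate) to form a mini-batch.

    person_codes: dict mapping identity -> (n_i, d) code array.
    """
    anchor_moments = central_moments(person_codes[anchor_id])
    scores = {
        pid: moment_discrepancy(anchor_moments, central_moments(c))
        for pid, c in person_codes.items() if pid != anchor_id
    }
    hardest = sorted(scores, key=scores.get)[:batch_size - 1]
    return [anchor_id] + hardest
```

Because the discrepancy is computed in the fixed attribute code space, the hard-identity ranking can be precomputed once and reused across training, independent of the evolving metric-learning embedding.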
URL
https://arxiv.org/abs/1905.02102