Abstract
Memory-dictionary-based contrastive learning has achieved remarkable results in unsupervised person Re-ID. However, methods that update the memory with all samples do not fully exploit the hardest samples to improve the model's generalization ability, while methods based on hardest-sample mining inevitably introduce false-positive samples that are incorrectly clustered in the early stages of training. Moreover, clustering-based methods usually discard a significant number of outliers, losing valuable information. To address these issues, we propose an adaptive intra-class variation contrastive learning algorithm for unsupervised Re-ID, called AdaInCV. The algorithm quantitatively evaluates the model's learning ability for each class by considering the intra-class variation after clustering, which helps select appropriate samples during training. Specifically, two new strategies are proposed: Adaptive Sample Mining (AdaSaM) and Adaptive Outlier Filter (AdaOF). The former gradually creates more reliable clusters to dynamically refine the memory, while the latter identifies and filters out valuable outliers as negative samples.
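The abstract does not give the exact formulation, but the per-class measure it describes can be illustrated with a minimal sketch: for each cluster produced by an (assumed) clustering step, compute the mean cosine distance of member features to the cluster centroid as a proxy for intra-class variation. The function name and the use of `-1` as the outlier label (as in DBSCAN) are assumptions, not the paper's actual implementation.

```python
import numpy as np

def intra_class_variation(features, labels):
    """Hypothetical sketch: per-cluster mean cosine distance to the centroid.

    features: (N, D) array of L2-normalized embeddings.
    labels:   (N,) cluster assignments; -1 marks clustering outliers
              (DBSCAN convention, assumed here) and is skipped.
    Returns a dict mapping cluster id -> mean intra-class variation.
    """
    variations = {}
    for c in np.unique(labels):
        if c == -1:  # outliers are handled separately (e.g., by AdaOF)
            continue
        members = features[labels == c]
        centroid = members.mean(axis=0)
        centroid /= np.linalg.norm(centroid)  # re-normalize the centroid
        # cosine distance of each member to the centroid
        dists = 1.0 - members @ centroid
        variations[int(c)] = float(dists.mean())
    return variations
```

A low value would indicate a compact, well-learned class; a high value a class the model still struggles with, which is the kind of signal the paper uses to adapt sample selection per class.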
Abstract (translated)
The memory-dictionary-based contrastive learning method has achieved remarkable results in unsupervised person Re-ID. However, memory-update methods based on all samples do not fully utilize the hardest samples to improve the model's generalization ability, while methods based on hardest-sample mining may introduce false-positive samples that are incorrectly clustered in the early stages of training. Clustering-based methods usually discard a large number of outliers, leading to the loss of important information. To address the aforementioned issues, we propose an adaptive intra-class variation contrastive learning algorithm, called AdaInCV. The algorithm quantitatively evaluates the model's learning ability for each class by considering the intra-class variation after clustering, which helps select appropriate samples during model training. Specifically, we propose two new strategies: Adaptive Sample Mining (AdaSaM) and Adaptive Outlier Filter (AdaOF). The first gradually creates more confident clusters to dynamically optimize the memory, while the second can identify and filter out valuable outliers as negative samples.
URL
https://arxiv.org/abs/2404.04665