Abstract
Unsupervised Re-ID methods aim to learn robust and discriminative features from unlabeled data. However, existing methods often ignore the relationship between the parameters of Re-ID framework modules and the feature distributions they produce, which can lead to feature misalignment and degrade model performance. To address this problem, we propose a dynamic clustering and cluster contrastive learning (DCCC) method. Specifically, we first design a dynamic clustering parameter scheduler (DCPS) that adjusts the clustering hyper-parameters to track the variation of intra- and inter-class distances during training. We then design a dynamic cluster contrastive learning (DyCL) method that matches the weights of the cluster representation vectors to the local feature associations. Finally, we build a label smoothing soft contrastive loss ($L_{ss}$) that balances cluster contrastive learning and self-supervised learning at low computational cost. Experiments on several widely used public datasets validate the effectiveness of the proposed DCCC, which outperforms previous state-of-the-art methods.
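The abstract does not give the loss formulas. As a point of reference only, the following is a minimal sketch of a generic cluster contrastive loss with label smoothing, the family of objectives $L_{ss}$ belongs to; the function name, the temperature, and the smoothing scheme are assumptions, not the paper's definitions:

```python
import numpy as np

def cluster_contrastive_loss(features, labels, centroids,
                             temperature=0.05, smoothing=0.1):
    """Generic cluster contrastive loss with label-smoothed soft targets.

    features:  (N, D) L2-normalized query features
    labels:    (N,)   pseudo-labels assigned by clustering
    centroids: (K, D) L2-normalized cluster representation vectors
    """
    # Similarity of each query to every cluster centroid, scaled by temperature.
    logits = features @ centroids.T / temperature              # (N, K)
    # Numerically stable log-softmax over clusters.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Label-smoothed targets: (1 - eps) on the assigned cluster,
    # eps spread uniformly over the remaining clusters.
    n, k = log_prob.shape
    targets = np.full((n, k), smoothing / (k - 1))
    targets[np.arange(n), labels] = 1.0 - smoothing
    # Cross-entropy between soft targets and the predicted cluster distribution.
    return float(-(targets * log_prob).sum(axis=1).mean())
```

Compared with a hard one-hot cluster contrastive loss, the smoothed targets keep a small probability mass on the other clusters, which softens the penalty for noisy pseudo-labels produced by clustering.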
URL
https://arxiv.org/abs/2303.06810