Abstract
Recently, contrastive self-supervised learning, in which the proximity of representations is determined by the identities of samples, has made remarkable progress in unsupervised representation learning. SimSiam is a well-known example in this area, noted for its simplicity yet strong performance. However, due to its structural characteristics, it is known to be sensitive to changes in training configurations such as hyperparameters and augmentation settings. To address this issue, we focus on the similarity between contrastive learning and the teacher-student framework in knowledge distillation. Inspired by ensemble-based knowledge distillation, the proposed method, EnSiam, aims to improve the contrastive learning procedure using ensemble representations. These provide stable pseudo-labels, leading to better performance. Experiments demonstrate that EnSiam outperforms previous state-of-the-art methods in most cases, including on ImageNet, which shows that EnSiam is capable of learning high-quality representations.
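The abstract does not spell out how the ensemble target is formed, but the stated idea — using an ensemble of representations as a stable target in a SimSiam-style negative-cosine-similarity objective — can be sketched as follows. This is a minimal, hypothetical NumPy sketch, assuming the target is the (stop-gradient) mean of the representations from K augmented views; the function name and shapes are illustrative, not the paper's actual implementation.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Normalize vectors to unit length along the given axis."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def ensemble_siamese_loss(predictions, view_representations):
    """Hypothetical EnSiam-style loss sketch.

    predictions:          (K, D) outputs of the predictor head, one per view.
    view_representations: (K, D) encoder outputs for K augmented views,
                          treated as detached (stop-gradient) targets.

    The ensemble target is the mean of the K view representations; the loss
    is the average negative cosine similarity between each prediction and
    this shared, more stable target.
    """
    target = l2_normalize(view_representations.mean(axis=0))  # ensemble target
    p = l2_normalize(predictions)
    return -np.mean(p @ target)  # in [-1, 1]; minimized at -1
```

Averaging over several views plays the role of the "stable pseudo-label" described in the abstract: fluctuations of any single view's representation are damped in the mean, analogous to an ensemble teacher in knowledge distillation.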
URL
https://arxiv.org/abs/2305.13391