Abstract
Self-supervised learning (SSL) is a powerful tool in machine learning, but understanding the learned representations and their underlying mechanisms remains a challenge. This paper presents an in-depth empirical analysis of SSL-trained representations, encompassing diverse models, architectures, and hyperparameters. Our study reveals an intriguing aspect of the SSL training process: it inherently facilitates the clustering of samples with respect to semantic labels, which is surprisingly driven by the regularization term of the SSL objective. This clustering not only enhances downstream classification but also compresses the information content of the data. Furthermore, we establish that SSL-trained representations align more closely with semantic classes than with random ones. Remarkably, the learned representations align with semantic classes across various hierarchical levels, and this alignment strengthens over the course of training and with network depth. Our findings provide valuable insights into SSL's representation learning mechanisms and their impact on performance across different sets of classes.
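The claim that representations align more closely with semantic classes than with random ones suggests a simple probe: compare classifier accuracy on the embeddings under true labels versus shuffled labels. Below is a minimal illustrative sketch of such a probe, assuming a nearest class-center (NCC) classifier; the synthetic Gaussian embeddings (standing in for an SSL encoder's output), the ncc_accuracy helper, and all parameters are hypothetical and not taken from the paper.

# Illustrative sketch (not the paper's code): quantify how well embeddings
# cluster around semantic labels versus random labels, using a nearest
# class-center (NCC) probe. Synthetic blobs stand in for SSL embeddings.
import numpy as np
from sklearn.neighbors import NearestCentroid
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Fake "SSL embeddings": 10 semantic classes, each a Gaussian blob in R^64.
n_classes, dim, per_class = 10, 64, 200
centers = rng.normal(size=(n_classes, dim))
X = np.concatenate([c + 0.5 * rng.normal(size=(per_class, dim)) for c in centers])
y = np.repeat(np.arange(n_classes), per_class)

def ncc_accuracy(X, y):
    """Held-out accuracy of a nearest-class-center probe on the embeddings."""
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    return NearestCentroid().fit(Xtr, ytr).score(Xte, yte)

acc_semantic = ncc_accuracy(X, y)                 # true semantic labels
acc_random = ncc_accuracy(X, rng.permutation(y))  # randomly shuffled labels

print(f"NCC accuracy, semantic labels: {acc_semantic:.3f}")
print(f"NCC accuracy, random labels:   {acc_random:.3f}")
# A large gap indicates the embedding space is organized around the
# semantic classes rather than an arbitrary partition of the data.

Applied to real SSL embeddings, running the same probe layer by layer and checkpoint by checkpoint would trace the alignment trends the abstract describes (increasing with training time and network depth).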
URL
https://arxiv.org/abs/2305.15614