Abstract
The goal of generality in machine learning is to achieve excellent performance on a wide range of unseen tasks and domains. Recently, self-supervised learning (SSL) has been regarded as an effective way to pursue this goal: it can learn high-quality representations from unlabeled data and achieves promising empirical performance on multiple downstream tasks. Existing SSL methods mainly pursue generality in two ways: (i) large-scale training data, and (ii) learning task-level shared knowledge. However, these methods do not model generality explicitly in the learning objective, and the theoretical understanding of SSL's generality remains limited. As a result, SSL models may overfit in data-scarce situations and generalize poorly in the real world, making true generality difficult to achieve. To address these issues, we provide a theoretical definition of generality in SSL and introduce a $\sigma$-measurement to quantify it. Based on this insight, we model generality explicitly in the self-supervised learning objective and propose a novel SSL framework, called GeSSL. It introduces a self-motivated target based on the $\sigma$-measurement, which enables the model to find the optimal update direction towards generality. Extensive theoretical and empirical evaluations demonstrate the superior performance of the proposed GeSSL.
URL
https://arxiv.org/abs/2405.01053