Abstract
Deep learning, long developed on general domains, is increasingly applied to domain-specific tasks that require recognizing fine-grained characteristics. However, real-world fine-grained applications face two challenges: annotation relies heavily on expert knowledge, and a single versatile model must serve various downstream tasks within the domain (e.g., predicting categories, bounding boxes, or pixel-wise annotations). Fortunately, recent self-supervised learning (SSL) is a promising approach for pretraining a model without annotations, providing an effective initialization for any downstream task. Because SSL does not depend on annotations, it typically leverages a large-scale unlabeled dataset, referred to as an open-set. In this light, we introduce a novel Open-Set Self-Supervised Learning problem under the assumption that, during the pretraining phase, a large-scale unlabeled open-set is available in addition to the fine-grained target dataset. In this setup, it is crucial to account for the distribution mismatch between the open-set and the target dataset. We therefore propose SimCore, an algorithm that samples a coreset: the subset of the open-set with minimum distance to the target dataset in the latent space. Extensive experiments, spanning eleven fine-grained datasets, seven open-sets, and various downstream tasks, demonstrate that SimCore significantly improves representation learning performance.
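The coreset idea described above can be illustrated with a minimal sketch: given feature embeddings of the open-set and the target dataset, select the open-set samples closest (here, by cosine similarity) to their nearest target sample. This is only an assumed simplification for illustration; the function name `select_coreset`, the use of cosine similarity, and the fixed budget `k` are not taken from the paper.

```python
import numpy as np

def select_coreset(open_feats: np.ndarray, target_feats: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k open-set samples nearest to the target dataset.

    open_feats:   (n_open, d) embeddings of the unlabeled open-set
    target_feats: (n_target, d) embeddings of the fine-grained target dataset
    """
    # Normalize to the unit sphere so a dot product equals cosine similarity.
    open_feats = open_feats / np.linalg.norm(open_feats, axis=1, keepdims=True)
    target_feats = target_feats / np.linalg.norm(target_feats, axis=1, keepdims=True)

    # For each open-set sample, similarity to its single nearest target sample.
    sim = open_feats @ target_feats.T          # shape (n_open, n_target)
    nearest_sim = sim.max(axis=1)

    # Keep the k open-set samples most similar to the target dataset.
    return np.argsort(-nearest_sim)[:k]
```

The selected subset would then be merged with the target dataset for SSL pretraining, mitigating the distribution mismatch that arises when pretraining on the full open-set.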
URL
https://arxiv.org/abs/2303.11101