Abstract
Standard Unsupervised Domain Adaptation (UDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain, but usually requires simultaneous access to both source and target data. Moreover, UDA approaches commonly assume that the source and target domains share the same label space. Yet these two assumptions are hardly satisfied in real-world scenarios. This paper considers the more challenging Source-Free Open-set Domain Adaptation (SF-OSDA) setting, where both assumptions are dropped. We propose a novel approach for SF-OSDA that exploits the granularity of target-private categories by segregating their samples into multiple unknown classes. Starting from an initial clustering-based assignment, our method progressively improves the segregation of target-private samples by refining their pseudo-labels under the guidance of an uncertainty-based sample selection module. Additionally, we propose a novel contrastive loss, named NL-InfoNCELoss, which integrates negative learning into self-supervised contrastive learning, enhancing the model's robustness to noisy pseudo-labels. Extensive experiments on benchmark datasets demonstrate the superiority of the proposed method over existing approaches, establishing new state-of-the-art performance. Notably, additional analyses show that our method is able to learn the underlying semantics of novel classes, opening the possibility of performing novel class discovery.
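The abstract does not give the exact formulation of NL-InfoNCELoss, but its two ingredients are standard: the InfoNCE contrastive objective and a negative-learning term that penalizes high probability on samples whose complementary pseudo-labels mark them as belonging to a different class. The sketch below is a minimal illustrative combination of the two, not the paper's actual loss; the function name, the additive form, and the `comp_mask` argument are all assumptions made for illustration.

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _cos(a, b):
    # cosine similarity between two embedding vectors
    return _dot(a, b) / (math.sqrt(_dot(a, a)) * math.sqrt(_dot(b, b)))

def nl_infonce(anchor, positive, negatives, comp_mask, tau=0.1):
    """Hypothetical sketch of an InfoNCE loss augmented with negative learning.

    anchor, positive : embedding vectors (lists of floats)
    negatives        : list of negative embedding vectors
    comp_mask        : booleans marking negatives whose complementary
                       pseudo-label says "not the anchor's class"
    """
    sims = [_cos(anchor, positive)] + [_cos(anchor, n) for n in negatives]
    logits = [s / tau for s in sims]
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    loss = -math.log(probs[0])            # standard InfoNCE: positive at index 0
    # negative-learning term: explicitly push down the softmax probability of
    # negatives known (via complementary pseudo-labels) to be a different class
    for p, is_comp in zip(probs[1:], comp_mask):
        if is_comp:
            loss += -math.log(1.0 - p + 1e-12)
    return loss
```

Under this sketch, a well-separated anchor (close to its positive, far from complementary-labeled negatives) yields a loss near zero, while an anchor drawn toward a complementary-labeled negative is penalized by both terms.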
URL
https://arxiv.org/abs/2404.10574