Abstract
The central challenge of Semi-Supervised Learning (SSL) is how to effectively leverage limited labeled data together with massive unlabeled data to improve a model's generalization performance. In this paper, we first revisit popular pseudo-labeling methods through a unified sample-weighting formulation and demonstrate the inherent quantity-quality trade-off of pseudo-labeling with confidence thresholding, which can severely hinder learning. To overcome this trade-off, we propose SoftMatch, which maintains both high quantity and high quality of pseudo-labels during training, effectively exploiting the unlabeled data. We derive a truncated Gaussian function to weight samples based on their confidence, which can be viewed as a soft version of the confidence threshold. We further propose a uniform alignment approach to boost the utilization of weakly learned classes. In experiments, SoftMatch shows substantial improvements across a wide variety of benchmarks, including image, text, and imbalanced classification.
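The truncated-Gaussian weighting can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name and the fixed `mu`/`sigma` arguments are assumptions (in the paper, these statistics are estimated from the confidence distribution of unlabeled data via an exponential moving average during training).

```python
import numpy as np

def truncated_gaussian_weights(confidences, mu, sigma, lambda_max=1.0):
    """Soft confidence threshold via a truncated Gaussian.

    Samples whose confidence is at or above the (estimated) mean `mu`
    receive the full weight `lambda_max`; below `mu`, the weight decays
    as a Gaussian with spread `sigma`, so low-confidence pseudo-labels
    are down-weighted rather than discarded outright.
    """
    confidences = np.asarray(confidences, dtype=float)
    return np.where(
        confidences >= mu,
        lambda_max,
        lambda_max * np.exp(-((confidences - mu) ** 2) / (2 * sigma ** 2)),
    )

# Illustrative values only: mu and sigma would be EMA estimates in practice.
w = truncated_gaussian_weights([0.95, 0.80, 0.50], mu=0.80, sigma=0.10)
```

Compared to a hard threshold (weight 1 above, 0 below), every unlabeled sample keeps a nonzero weight, preserving pseudo-label quantity while still discounting low-quality predictions.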
URL
https://arxiv.org/abs/2301.10921