Abstract
In recent years, semi-supervised learning (SSL) has gained significant attention due to its ability to leverage both labeled and unlabeled data to improve model performance, especially when labeled data is scarce. However, most current SSL methods rely on heuristics or predefined rules for generating pseudo-labels and leveraging unlabeled data, and are largely confined to standard loss functions and regularization schemes. In this paper, we propose a novel Reinforcement Learning (RL) guided SSL method, RLGSSL, which formulates SSL as a one-armed bandit problem and deploys an innovative RL loss based on a weighted reward to adaptively guide the learning process of the prediction model. RLGSSL incorporates a carefully designed reward function that balances the use of labeled and unlabeled data to enhance generalization performance. A semi-supervised teacher-student framework is further deployed to increase learning stability. We demonstrate the effectiveness of RLGSSL through extensive experiments on several benchmark datasets and show that our approach consistently outperforms state-of-the-art SSL methods.
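The abstract does not give the exact form of the weighted-reward RL loss or the teacher-student update. As a rough illustration only, a REINFORCE-style loss that weights the student's log-likelihood of teacher pseudo-labels by a scalar reward, combined with an exponential-moving-average (EMA) teacher, might be sketched as follows (all function names and the specific loss form are assumptions, not the paper's actual implementation):

```python
import torch
import torch.nn.functional as F

def rl_guided_loss(student_logits, teacher_logits, reward):
    """Hypothetical REINFORCE-style SSL loss: the student's log-probability of
    the teacher's pseudo-labels, scaled by a scalar reward. The paper's actual
    reward function and loss may differ."""
    pseudo = teacher_logits.argmax(dim=1)                 # teacher pseudo-labels
    log_probs = F.log_softmax(student_logits, dim=1)
    picked = log_probs.gather(1, pseudo.unsqueeze(1)).squeeze(1)
    return -(reward * picked).mean()                      # maximize reward-weighted log-lik.

@torch.no_grad()
def ema_update(teacher, student, decay=0.99):
    """Common teacher-student stabilization: teacher weights track an
    exponential moving average of the student's weights."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(decay).add_(s, alpha=1 - decay)
```

A higher reward (e.g. from improved accuracy on held-out labeled data) would push the student more strongly toward the teacher's pseudo-labels, while a low reward dampens that signal.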
URL
https://arxiv.org/abs/2405.01760