Abstract
Pre-trained transformer models such as BERT have shown massive gains across many text classification tasks. However, these models usually require large amounts of labeled data to achieve impressive performance. Obtaining labeled data is often expensive and time-consuming, whereas collecting unlabeled data with simple heuristics is comparatively cheap for almost any task. Therefore, this paper proposes a method that combines reinforcement learning-based text generation and semi-supervised adversarial learning in a novel way to improve the model's performance. Our method, READ (Reinforcement-based Adversarial learning), uses an unlabeled dataset to generate diverse synthetic text through reinforcement learning and improves the model's generalization capability through adversarial learning. Our experimental results show that READ outperforms existing state-of-the-art methods on multiple datasets.
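The abstract only sketches the two ingredients (RL-driven synthetic text generation from unlabeled data, plus adversarial training of the classifier), so the following is a minimal toy sketch of how such a loop could be wired up, not the paper's actual method: the bag-of-embeddings classifier, the toy generator, the diversity reward, the FGSM-style embedding perturbation, and the use of pseudo-labels on synthetic text are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, CLASSES, SEQ = 1000, 32, 2, 16  # toy sizes, assumptions

class ToyClassifier(nn.Module):
    """Stand-in for the transformer classifier (assumption)."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.head = nn.Linear(EMB, CLASSES)
    def embed(self, ids):                 # token ids -> embeddings
        return self.emb(ids)
    def forward_from_emb(self, e):        # embeddings -> class logits
        return self.head(e.mean(dim=1))

class ToyGenerator(nn.Module):
    """Stand-in for the RL text generator (assumption)."""
    def __init__(self):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(VOCAB))
    def sample(self, batch):              # sample synthetic token ids
        dist = torch.distributions.Categorical(logits=self.logits)
        ids = dist.sample((batch, SEQ))
        return ids, dist.log_prob(ids).sum(dim=1)

def adversarial_loss(clf, ids, labels, eps=1e-2):
    """FGSM-style perturbation in embedding space; one common choice,
    not necessarily the paper's exact adversarial scheme."""
    e = clf.embed(ids).detach().requires_grad_(True)
    loss = F.cross_entropy(clf.forward_from_emb(e), labels)
    (grad,) = torch.autograd.grad(loss, e)
    e_adv = e + eps * grad.sign()
    return F.cross_entropy(clf.forward_from_emb(e_adv), labels)

clf, gen = ToyClassifier(), ToyGenerator()
opt = torch.optim.Adam(list(clf.parameters()) + list(gen.parameters()), lr=1e-3)

for _ in range(3):
    # 1) RL step: reward the generator for producing diverse synthetic text
    #    (diversity = fraction of unique tokens; an illustrative reward only).
    ids, log_prob = gen.sample(batch=8)
    diversity = torch.tensor([len(set(s.tolist())) / SEQ for s in ids])
    rl_loss = -(diversity * log_prob).mean()          # REINFORCE objective

    # 2) Adversarial step: train the classifier on perturbed embeddings of the
    #    synthetic text, using its own predictions as pseudo-labels (assumption).
    with torch.no_grad():
        pseudo = clf.forward_from_emb(clf.embed(ids)).argmax(dim=1)
    adv_loss = adversarial_loss(clf, ids, pseudo)

    opt.zero_grad()
    (rl_loss + adv_loss).backward()
    opt.step()
```

In a real setting the toy modules would be replaced by a pre-trained transformer and a sequence generator, and the reward and perturbation would follow the choices made in the paper.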
Abstract (translated)
Pre-trained Transformer models (such as BERT) have demonstrated substantial advantages on many text classification tasks. However, these models usually require large amounts of annotated data to reach an impressive level of performance. Obtaining annotated data is often expensive and time-consuming, whereas collecting unlabeled data with heuristic methods is relatively much cheaper for any task. Therefore, this paper proposes a novel method that combines reinforcement learning-based text generation with semi-supervised adversarial learning to improve the model's performance. Our method, called READ (Reinforcement-based Adversarial Learning), uses an unlabeled dataset to generate diverse synthetic text through reinforcement learning and improves the model's generalization capability with adversarial learning. Experimental results show that READ outperforms existing state-of-the-art methods on multiple datasets.
URL
https://arxiv.org/abs/2501.08035