Abstract
While the performance of many text classification tasks has recently improved thanks to Pre-trained Language Models (PLMs), in this paper we show that these models still suffer a performance gap when the underlying distribution of topics changes. For example, a genre classifier trained on \textit{political} topics often fails when tested on documents about \textit{sport} or \textit{medicine}. In this work, we quantify this phenomenon empirically with a large corpus and a large set of topics. We verify that domain transfer remains challenging both for classic PLMs, such as BERT, and for modern large models, such as GPT-3. We also suggest and successfully test a possible remedy: after augmenting the training dataset with topically-controlled synthetic texts, the F1 score improves by up to 50\% for some topics, approaching on-topic training results, while other topics show little to no improvement. While our empirical results focus on genre classification, our methodology is applicable to other classification tasks such as gender, authorship, or sentiment classification. The code and data to replicate the experiments are available at this https URL
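The remedy described above, augmenting a training set with topically-controlled synthetic texts and measuring the resulting F1 change, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the data, the `augment_with_synthetic` helper, and the hand-rolled binary F1 are all hypothetical stand-ins for the paper's corpus, generation pipeline, and evaluation code.

```python
def augment_with_synthetic(train_set, synthetic_by_topic, target_topics):
    """Append topically-controlled synthetic (text, genre) pairs for the
    requested target topics to the original training set."""
    augmented = list(train_set)
    for topic in target_topics:
        augmented.extend(synthetic_by_topic.get(topic, []))
    return augmented


def f1_binary(y_true, y_pred, positive):
    """F1 score for one class, computed from precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0


# Toy on-topic training data (political documents only).
train = [
    ("parliament passed the budget bill", "news"),
    ("a scathing look at the new cabinet memoir", "review"),
]

# Hypothetical synthetic texts generated under topical control,
# covering the off-distribution topics the classifier will face.
synthetic_by_topic = {
    "sport": [("the home side won the final in extra time", "news")],
    "medicine": [("an enthusiastic take on this cardiology textbook", "review")],
}

augmented = augment_with_synthetic(train, synthetic_by_topic, ["sport", "medicine"])
print(len(augmented))  # original 2 examples + 2 synthetic ones
```

In the paper's setting, the genre classifier would then be retrained on the augmented set and evaluated per topic with F1, so that the gain over the topic-shifted baseline can be reported topic by topic.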
URL
https://arxiv.org/abs/2311.16083