Abstract
Large language models (LLMs) increasingly serve as the backbone for classifiers that assign text from distinct domains to several labels (classes) simultaneously. When a domain shift occurs, e.g., a movie-review classifier moving from IMDb to Rotten Tomatoes, adapting such an LLM-based multi-label classifier is challenging due to the incomplete label set at the target domain and the daunting training overhead. Existing domain adaptation methods address either multi-label image classifiers or binary text classifiers. In this paper, we design DALLMi, Domain Adaptation Large Language Model interpolator, a first-of-its-kind semi-supervised domain adaptation method for LLM-based text data models, specifically BERT. The core of DALLMi is the novel variation loss and MixUp regularization, which jointly leverage the limited positively labeled text, the large quantity of unlabeled text, and, importantly, their interpolations computed from the BERT word embeddings. DALLMi also introduces a label-balanced sampling strategy to overcome the imbalance between labeled and unlabeled data. We evaluate DALLMi against partially supervised and unsupervised approaches on three datasets under different scenarios of label availability for the target domain. Our results show that DALLMi achieves 19.9% and 52.2% higher mAP than the unsupervised and partially supervised approaches, respectively.
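To make the interpolation idea concrete, below is a minimal sketch of MixUp over BERT word embeddings: token-level embeddings of a labeled and an unlabeled batch are blended with a coefficient drawn from a Beta distribution, the standard MixUp recipe. Everything here (the `alpha` value, the function names, and the use of `bert-base-uncased`) is illustrative and not the paper's actual implementation.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts, max_length=128):
    # Token-level BERT embeddings, padded to a fixed length so two
    # batches can be interpolated element-wise.
    batch = tokenizer(texts, padding="max_length", truncation=True,
                      max_length=max_length, return_tensors="pt")
    return bert(**batch).last_hidden_state  # (batch, seq_len, hidden)

def mixup(emb_labeled, emb_unlabeled, alpha=0.4):
    # Standard MixUp: blend the two embedding batches with a
    # coefficient sampled from Beta(alpha, alpha).
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    return lam * emb_labeled + (1.0 - lam) * emb_unlabeled, lam

# Usage: interpolate a labeled and an unlabeled mini-batch.
mixed, lam = mixup(embed(["a gripping, well-acted film"]),
                   embed(["release date pushed to next fall"]))
```

The same blending weight `lam` would also mix the corresponding label vectors, which is how MixUp lets the scarce positively labeled samples regularize training on the unlabeled pool.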
URL
https://arxiv.org/abs/2405.01883