Paper Reading AI Learner

DALLMi: Domain Adaption for LLM-based Multi-label Classifier

2024-05-03 07:04:26
Miruna Beţianu, Abele Mălan, Marco Aldinucci, Robert Birke, Lydia Chen

Abstract

Large language models (LLMs) increasingly serve as the backbone for classifying text associated with distinct domains and, simultaneously, several labels (classes). When encountering domain shifts, e.g., a classifier of movie reviews moving from IMDb to Rotten Tomatoes, adapting such an LLM-based multi-label classifier is challenging due to the incomplete label sets at the target domain and daunting training overhead. Existing domain adaptation methods address either image multi-label classifiers or text binary classifiers. In this paper, we design DALLMi, a Domain Adaptation Large Language Model interpolator, a first-of-its-kind semi-supervised domain adaptation method for LLM-based text data models, specifically BERT. The core of DALLMi is the novel variation loss and MixUp regularization, which jointly leverage the limited positively labeled and large quantity of unlabeled text and, importantly, their interpolation from the BERT word embeddings. DALLMi also introduces a label-balanced sampling strategy to overcome the imbalance between labeled and unlabeled data. We evaluate DALLMi against partially-supervised and unsupervised approaches on three datasets under different scenarios of label availability for the target domain. Our results show that DALLMi achieves 19.9% and 52.2% higher mAP than unsupervised and partially-supervised approaches, respectively.
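The MixUp regularization mentioned above can be illustrated with a minimal sketch: interpolating a labeled sample's word embeddings and multi-hot label vector with those of an unlabeled sample (whose labels would be pseudo-labels). This is a generic MixUp sketch under assumed shapes, not the authors' implementation; the function name `mixup_embeddings` and all tensor shapes are hypothetical.

```python
import numpy as np

def mixup_embeddings(emb_a, emb_b, y_a, y_b, alpha=0.4, rng=None):
    """MixUp interpolation of word embeddings and multi-hot label vectors.

    A minimal sketch of the interpolation idea named in the abstract, not
    DALLMi's actual loss: a coefficient lambda ~ Beta(alpha, alpha) linearly
    mixes a labeled sample's embeddings/labels with an unlabeled sample's
    embeddings and (hypothetical) pseudo-labels.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    emb_mix = lam * emb_a + (1.0 - lam) * emb_b
    y_mix = lam * y_a + (1.0 - lam) * y_b
    return emb_mix, y_mix, lam

# Toy usage: 2 texts, sequence length 4, embedding size 8, 3 labels.
rng = np.random.default_rng(0)
emb_a = rng.standard_normal((2, 4, 8))        # embeddings of labeled texts
emb_b = rng.standard_normal((2, 4, 8))        # embeddings of unlabeled texts
y_a = np.array([[1., 0., 1.], [0., 1., 0.]])  # known multi-hot labels
y_b = np.array([[0., 1., 0.], [1., 0., 0.]])  # hypothetical pseudo-labels
emb_mix, y_mix, lam = mixup_embeddings(emb_a, emb_b, y_a, y_b, rng=rng)
```

The mixed pair `(emb_mix, y_mix)` acts as a synthetic training example lying between the labeled and unlabeled samples in embedding space, which is the general mechanism by which MixUp-style interpolation regularizes a classifier.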


URL

https://arxiv.org/abs/2405.01883

PDF

https://arxiv.org/pdf/2405.01883.pdf

