Abstract
Large Language Models (LLMs) operating in 0-shot or few-shot settings achieve competitive results on text classification tasks. In-Context Learning (ICL) typically achieves higher accuracy than the 0-shot setting, but at the cost of efficiency, since the input prompt is longer. In this paper, we propose a strategy that makes LLMs as efficient as 0-shot text classifiers while achieving accuracy comparable to or better than ICL. Our solution targets the low-resource setting, i.e., when only 4 examples per class are available. Using a single LLM and the few-shot real data, we perform a sequence of generation, filtering, and Parameter-Efficient Fine-Tuning steps to create a robust and efficient classifier. Experimental results show that our approach yields competitive results on multiple text classification datasets.
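The abstract describes a three-stage pipeline: generate synthetic examples with the LLM, filter them, then fine-tune the same LLM with a parameter-efficient method. The sketch below is a minimal illustration of that flow, assuming a HuggingFace transformers/peft stack; the model name, prompt format, the self-consistency filtering criterion, the `classify_fn` helper, and the LoRA configuration are illustrative assumptions, not the paper's reported setup.

```python
# Minimal sketch of the generation -> filtering -> PEFT pipeline the
# abstract outlines. All concrete choices here (model, prompt, filter,
# LoRA hyperparameters) are assumptions for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "mistralai/Mistral-7B-v0.1"  # hypothetical choice of LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def generate_synthetic(seed_examples, label, n=32):
    """Stage 1: prompt the LLM with the 4 real examples of a class
    to sample additional synthetic examples for that class."""
    demos = "\n".join(f"Text: {t}" for t in seed_examples)
    prompt = f"Examples of class '{label}':\n{demos}\nText:"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs, max_new_tokens=64, do_sample=True,
        num_return_sequences=n, pad_token_id=tokenizer.eos_token_id,
    )
    # Strip the prompt tokens and keep only the first generated line.
    gen = outputs[:, inputs["input_ids"].shape[1]:]
    texts = tokenizer.batch_decode(gen, skip_special_tokens=True)
    return [t.strip().split("\n")[0] for t in texts]

def filter_synthetic(candidates, label, classify_fn):
    """Stage 2: keep only generations the LLM itself labels
    consistently. classify_fn is a hypothetical helper; the paper's
    actual filtering criterion may differ."""
    return [c for c in candidates if classify_fn(c) == label]

# Stage 3: wrap the same LLM with LoRA adapters and fine-tune it on
# the filtered synthetic data plus the few real examples, yielding a
# classifier that runs with a short, 0-shot-length prompt.
lora_cfg = LoraConfig(r=8, lora_alpha=16,
                      target_modules=["q_proj", "v_proj"],
                      task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()
# ... standard supervised fine-tuning loop over the augmented data ...
```

Because the fine-tuned model classifies with only the input text (no in-context demonstrations), inference cost matches the 0-shot setting, which is the efficiency gain the abstract claims.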
URL
https://arxiv.org/abs/2404.02422