Abstract
We present Label Space Reduction (LSR), a novel method for improving the zero-shot classification performance of Large Language Models (LLMs). LSR iteratively refines the classification label space by systematically ranking and reducing candidate classes, enabling the model to concentrate on the most relevant options. By applying the statistical learning capabilities of data-driven models to unlabeled data, LSR dynamically optimizes the label space representation at test time. Our experiments across seven benchmarks demonstrate that LSR improves macro-F1 scores by an average of 7.0% (up to 14.2%) with Llama-3.1-70B and 3.3% (up to 11.1%) with Claude-3.5-Sonnet compared to standard zero-shot classification baselines. To reduce the computational overhead of LSR, which requires an additional LLM call at each iteration, we propose distilling the model into a probabilistic classifier, enabling efficient inference.
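Below is a minimal sketch of the iterative rank-and-reduce loop the abstract describes, not the paper's actual implementation. The `rank_labels` callable is a hypothetical stand-in for the per-iteration LLM call that orders candidate classes by relevance; the `keep_ratio` and `min_labels` parameters are illustrative assumptions.

```python
from typing import Callable, List

def lsr_classify(
    text: str,
    labels: List[str],
    rank_labels: Callable[[str, List[str]], List[str]],  # hypothetical LLM ranking call
    keep_ratio: float = 0.5,   # fraction of candidates kept per iteration (assumed)
    min_labels: int = 2,       # stop shrinking once this few classes remain (assumed)
) -> str:
    """Iteratively shrink the label space, then classify over the survivors."""
    candidates = list(labels)
    while len(candidates) > min_labels:
        # One LLM call per iteration: rank remaining classes by relevance to `text`.
        ranked = rank_labels(text, candidates)
        keep = max(min_labels, int(len(ranked) * keep_ratio))
        if keep >= len(candidates):
            break  # safety guard: stop if the label space would not shrink
        candidates = ranked[:keep]  # drop the least relevant classes
    # Final zero-shot decision over the reduced label space.
    return rank_labels(text, candidates)[0]

if __name__ == "__main__":
    # Toy ranker for illustration only: scores labels by word overlap with the text.
    def toy_ranker(text: str, labels: List[str]) -> List[str]:
        words = set(text.lower().split())
        return sorted(labels,
                      key=lambda lab: len(words & set(lab.lower().split())),
                      reverse=True)

    print(lsr_classify("new smartphone technology unveiled today",
                       ["technology", "sports", "politics", "finance"],
                       toy_ranker))  # -> "technology"
```

The distillation step mentioned in the abstract would, on this reading, replace the per-iteration `rank_labels` LLM call with a trained probabilistic classifier at inference time, amortizing the cost of the iterative loop.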
URL
https://arxiv.org/abs/2502.08436