Abstract
This study contributes to the debate on the efficiency of large versus small language models for text classification by prompting. We assess the performance of small language models in zero-shot text classification, challenging the prevailing dominance of larger models. Across 15 datasets, we benchmark language models ranging from 77M to 40B parameters, covering different architectures and scoring functions. Our findings reveal that small models can classify texts effectively, matching or surpassing their larger counterparts. We have developed and released a comprehensive open-source repository that encapsulates our methodologies. This research underscores the notion that bigger is not always better, suggesting that resource-efficient small models may offer viable solutions for specific data classification challenges.
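The zero-shot setup described above can be illustrated with a minimal sketch: each candidate label is scored against a prompted input, and the highest-scoring label wins. The stub scoring function, keyword table, and names below are hypothetical stand-ins; a real implementation would use a language model's token log-probabilities as the scoring function.

```python
import math

# Hypothetical keyword table, standing in for what an LM has learned.
KEYWORDS = {
    "sports": {"striker", "scored", "match"},
    "politics": {"election", "vote", "senate"},
}

def stub_label_logprob(prompt: str, label: str) -> float:
    # Stand-in scoring function: a real implementation would sum (or
    # length-normalize) the LM's log-probs of the label tokens given the prompt.
    hits = sum(1 for w in KEYWORDS[label] if w in prompt.lower())
    return math.log(0.1 + 0.4 * hits)

def zero_shot_classify(text: str, labels) -> str:
    # Wrap the input in a prompt template and pick the highest-scoring label.
    prompt = f"Text: {text}\nTopic:"
    return max(labels, key=lambda lab: stub_label_logprob(prompt, lab))

print(zero_shot_classify("The striker scored twice in the final.",
                         ["sports", "politics"]))  # -> sports
```

Swapping the stub for different LM-based scoring functions (raw log-likelihood, length-normalized likelihood, etc.) is exactly the axis of variation the study benchmarks.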
Abstract (translated)
This study is part of the debate on the efficiency of large versus small language models for text classification. We evaluate the zero-shot text classification performance of small language models via prompting, challenging the prevailing dominance of large models. Across 15 datasets, we evaluate language models from 77M to 40B parameters using different architectures and scoring functions. Our results show that small models can classify texts effectively, matching or even surpassing large models. We have also developed and shared a comprehensive open-source repository encapsulating our methodology. This research emphasizes that bigger is not always better, suggesting that resource-efficient small models may offer viable solutions for specific data classification challenges.
URL
https://arxiv.org/abs/2404.11122