Abstract
Pre-training language models on large text corpora is common practice in Natural Language Processing. These models are then fine-tuned to achieve the best results on a variety of downstream tasks. In this paper we question the common practice of adding only a single output layer as a classification head on top of the network. We perform an AutoML search to find architectures that outperform the standard single layer at only a small additional compute cost. We validate our classification architecture on a variety of NLP benchmarks from the GLUE suite.
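To make the contrast concrete, below is a minimal sketch (not the authors' code) of the standard single-linear-layer classification head next to a slightly deeper head of the kind an architecture search might propose. The hidden size, intermediate width, activation, and dropout values are illustrative assumptions, not the searched architecture reported in the paper.

```python
import torch
import torch.nn as nn


class SingleLayerHead(nn.Module):
    """The common baseline: one linear layer on top of the pooled encoding."""

    def __init__(self, hidden_size: int, num_labels: int):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, pooled_output: torch.Tensor) -> torch.Tensor:
        return self.classifier(pooled_output)


class SearchedHead(nn.Module):
    """A deeper head standing in for an architecture found by AutoML search
    (hypothetical structure; the actual searched head may differ)."""

    def __init__(self, hidden_size: int, num_labels: int,
                 intermediate_size: int = 256, dropout: float = 0.1):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(hidden_size, intermediate_size),
            nn.GELU(),
            nn.Dropout(dropout),
            nn.Linear(intermediate_size, num_labels),
        )

    def forward(self, pooled_output: torch.Tensor) -> torch.Tensor:
        return self.head(pooled_output)


if __name__ == "__main__":
    # e.g. pooled [CLS] representations from a BERT-base-sized encoder (hidden size 768)
    pooled = torch.randn(8, 768)
    print(SingleLayerHead(768, 2)(pooled).shape)  # torch.Size([8, 2])
    print(SearchedHead(768, 2)(pooled).shape)     # torch.Size([8, 2])
```

Either head would sit on top of a frozen or fine-tuned encoder; the searched variant adds only a small number of parameters relative to the encoder, consistent with the paper's claim of a small compute cost.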
URL
https://arxiv.org/abs/2403.18547