Abstract
The use of transfer learning methods is largely responsible for the recent breakthroughs in Natural Language Processing (NLP) tasks across multiple domains. To address the problem of sentiment detection, we examined the performance of four well-known state-of-the-art transformer models for text classification: Bidirectional Encoder Representations from Transformers (BERT), the Robustly Optimized BERT Pre-training Approach (RoBERTa), a distilled version of BERT (DistilBERT), and the generalized autoregressive model XLNet. We compared the performance of these four models on the task of detecting disasters in text. All of the models performed well, indicating that transformer-based models are suitable for disaster detection in text. The RoBERTa model performed best on the test dataset, with a score of 82.6%, and is highly recommended for quality predictions. Furthermore, we found that the performance of the learning algorithms was influenced by the pre-processing techniques, the nature of the words in the vocabulary, unbalanced labeling, and the model parameters.
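As a rough illustration of the kind of setup the abstract describes, the sketch below loads a pretrained RoBERTa model for binary text classification with the Hugging Face transformers library. The checkpoint name, the label convention (0 = not a disaster, 1 = disaster), and the example tweets are illustrative assumptions, not details taken from the paper; in practice the classification head would first be fine-tuned on a labeled disaster-tweet dataset.

    # Minimal sketch: binary disaster-vs-not classification with pretrained
    # RoBERTa via Hugging Face transformers. Checkpoint, labels, and example
    # texts are illustrative assumptions, not the authors' exact setup.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("roberta-base")
    model = AutoModelForSequenceClassification.from_pretrained(
        "roberta-base", num_labels=2  # assumed: 0 = not disaster, 1 = disaster
    )

    texts = [
        "Forest fire near the town, residents evacuating",  # hypothetical tweets
        "I love fruits",
    ]
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

    with torch.no_grad():
        logits = model(**batch).logits
    preds = logits.argmax(dim=-1)  # predicted class index per input text
    print(preds.tolist())

The same pattern applies to the other three models by swapping the checkpoint name (e.g. "bert-base-uncased", "distilbert-base-uncased", "xlnet-base-cased"); the classification head is newly initialized and must be trained before the predictions are meaningful.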
URL
https://arxiv.org/abs/2303.07292