Abstract
This paper reports on the evaluation of Deep Learning (DL) transformer architecture models for Named-Entity Recognition (NER) on ten low-resourced South African (SA) languages. In addition, these DL transformer models were compared with other Neural Network and Machine Learning (ML) NER models. The findings show that the transformer models improve performance significantly when discrete fine-tuning parameters are applied per language. Furthermore, the fine-tuned transformer models outperform the other neural network and machine learning models at NER on the low-resourced SA languages: they produced the highest F-scores for six of the ten SA languages and achieved the highest average F-score, surpassing the Conditional Random Fields (CRF) ML model. Additional research could evaluate the more recent transformer architecture models on other Natural Language Processing tasks and applications, such as phrase chunking, machine translation, and part-of-speech tagging.
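As a rough illustration of the per-language fine-tuning setup the abstract describes, the sketch below fine-tunes a multilingual transformer for NER as token classification with hyperparameters chosen separately for each language. It assumes the Hugging Face Transformers library and an XLM-RoBERTa base model; the model name, language codes, and parameter values are illustrative placeholders, not the paper's reported configuration.

# Hypothetical sketch: per-language fine-tuning of a transformer NER model.
# Model name, language codes, and hyperparameter values are assumptions.
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Assumed discrete fine-tuning parameters, selected separately per language.
PER_LANGUAGE_PARAMS = {
    "zu": {"num_train_epochs": 4, "learning_rate": 5e-5},   # isiZulu (example)
    "af": {"num_train_epochs": 3, "learning_rate": 3e-5},   # Afrikaans (example)
}

def fine_tune_ner(language, train_dataset, eval_dataset, num_labels):
    """Fine-tune one NER (token classification) model for a given language."""
    params = PER_LANGUAGE_PARAMS[language]
    tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
    model = AutoModelForTokenClassification.from_pretrained(
        "xlm-roberta-base", num_labels=num_labels)
    args = TrainingArguments(
        output_dir=f"ner-{language}",
        num_train_epochs=params["num_train_epochs"],
        learning_rate=params["learning_rate"],
        per_device_train_batch_size=16,
    )
    trainer = Trainer(model=model, args=args,
                      train_dataset=train_dataset,
                      eval_dataset=eval_dataset,
                      tokenizer=tokenizer)
    trainer.train()
    # Evaluation metrics (e.g., F-score) would be computed here per language.
    return trainer.evaluate()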
URL
https://arxiv.org/abs/2111.00830