Abstract
This paper presents ViDeBERTa, a new pre-trained monolingual language model for Vietnamese, with three versions - ViDeBERTa_xsmall, ViDeBERTa_base, and ViDeBERTa_large - which are pre-trained on a large-scale corpus of high-quality and diverse Vietnamese texts using the DeBERTa architecture. Although many successful Transformer-based pre-trained language models have been proposed for English, there are still few pre-trained models for Vietnamese, a low-resource language, that achieve strong results on downstream tasks, especially question answering. We fine-tune and evaluate our model on three important natural language downstream tasks: part-of-speech tagging, named-entity recognition, and question answering. The empirical results demonstrate that ViDeBERTa, with far fewer parameters, surpasses the previous state-of-the-art models on multiple Vietnamese-specific natural language understanding tasks. Notably, ViDeBERTa_base with 86M parameters, only about 23% of the 370M parameters of PhoBERT_large, still performs on par with or better than the previous state-of-the-art model. Our ViDeBERTa models are available at: this https URL.
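For readers who want to try the released checkpoints, below is a minimal sketch of loading ViDeBERTa for a token-level task (POS tagging or NER) with the Hugging Face transformers library. The hub identifier "Fsoft-AIC/videberta-base" and the label count are illustrative assumptions, not details stated in the abstract; substitute the checkpoint actually published by the authors.

```python
# Minimal sketch: loading a ViDeBERTa checkpoint for token classification
# (e.g., POS tagging or NER) via Hugging Face transformers.
# NOTE: the hub id and num_labels below are assumptions for illustration.
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_name = "Fsoft-AIC/videberta-base"  # assumed hub id, not from the abstract
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name,
    num_labels=9,  # placeholder tag-set size; set to your dataset's label count
)

# Tokenize a Vietnamese sentence and run a forward pass.
inputs = tokenizer("Hà Nội là thủ đô của Việt Nam .", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (1, seq_len, num_labels)
```

The same checkpoint can be paired with other task heads (e.g., AutoModelForQuestionAnswering) to reproduce the question-answering setting described above.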
URL
https://arxiv.org/abs/2301.10439