Abstract
This paper proposes a sequence-to-sequence learning approach for Arabic pronoun resolution, exploring the effectiveness of advanced natural language processing (NLP) techniques, specifically Bi-LSTM and the pre-trained BERT language model, on this task. The proposed approach is evaluated on the AnATAr dataset, and its performance is compared to several baselines, including traditional machine learning models (KNN, logistic regression, and SVM) and handcrafted feature-based models. Our results demonstrate that the proposed model outperforms all baseline models across all metrics. In addition, we explore several modifications to the model: concatenating the anaphor text with the paragraph text as input, adding a mask that focuses the model on candidate scores, and filtering candidates based on gender and number agreement with the anaphor. These modifications significantly improve the model's performance, achieving up to 81% MRR and 71% F1 score, along with higher precision, recall, and accuracy. These findings suggest that the proposed model is an effective approach to Arabic pronoun resolution and highlight the potential benefits of leveraging advanced neural NLP models.
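As an illustration only (not the paper's implementation), the gender/number agreement filter mentioned above could be sketched as follows; the function name, dictionary keys, and feature encoding are assumptions for the sketch:

```python
# Hypothetical sketch of candidate filtering by gender/number agreement:
# keep only antecedent candidates whose morphological gender and number
# match those of the anaphoric pronoun.

def filter_candidates(anaphor, candidates):
    """Return candidates agreeing with the anaphor in gender and number."""
    return [
        c for c in candidates
        if c["gender"] == anaphor["gender"] and c["number"] == anaphor["number"]
    ]

# Toy example: the feminine singular pronoun "هي" should only retain
# feminine singular candidates such as "الطالبة" (the female student).
anaphor = {"text": "هي", "gender": "f", "number": "sg"}
candidates = [
    {"text": "الطالبة", "gender": "f", "number": "sg"},
    {"text": "المعلمون", "gender": "m", "number": "pl"},
]
print([c["text"] for c in filter_candidates(anaphor, candidates)])
```

Such a filter shrinks the candidate set before scoring, which is why agreement constraints tend to raise precision without requiring any change to the underlying neural model.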
URL
https://arxiv.org/abs/2305.11529