Abstract
The common practice of preprocessing text before feeding it into NLP models introduces many decision points that can have unintended consequences on model performance. In this opinion piece, we focus on the handling of diacritics in texts originating in many languages and scripts. We demonstrate, through several case studies, the adverse effects of inconsistently encoding diacritized characters and of removing diacritics altogether. We call on the community to adopt simple but necessary steps across all models and toolkits in order to improve the handling of diacritized text and, by extension, increase equity in multilingual NLP.
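A minimal illustration (not taken from the paper) of the encoding inconsistency the abstract refers to: the same diacritized string can be represented either with precomposed code points (NFC) or with base characters plus combining marks (NFD). The two forms render identically but compare unequal, so a model or tokenizer that receives a mix of both treats them as different text. The sketch below uses Python's standard `unicodedata` module.

```python
import unicodedata

# "café" with a precomposed "é" (U+00E9) -- typical NFC input.
nfc = "caf\u00e9"

# The same word decomposed into "e" + combining acute accent (U+0301).
nfd = unicodedata.normalize("NFD", nfc)

# Visually identical, yet unequal at the code-point level.
print(nfc == nfd)            # False
print(len(nfc), len(nfd))    # 4 5

# Normalizing both sides to one form restores equality.
print(unicodedata.normalize("NFC", nfd) == nfc)  # True
```

Applying a single normalization form (e.g. NFC) consistently before tokenization is one of the simple steps such inconsistencies call for.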
URL
https://arxiv.org/abs/2410.24140