Abstract
Cross-lingual named entity recognition (NER) aims to train an NER system that generalizes well to a target language by leveraging labeled data in a given source language. Previous work alleviates the data scarcity problem by translating source-language labeled data or performing knowledge distillation on target-language unlabeled data. However, these methods may suffer from label noise due to the automatic labeling process. In this paper, we propose CoLaDa, a Collaborative Label Denoising Framework, to address this problem. Specifically, we first explore a model-collaboration-based denoising scheme that enables models trained on different data sources to collaboratively denoise pseudo labels used by each other. We then present an instance-collaboration-based strategy that considers the label consistency of each token's neighborhood in the representation space for denoising. Experiments on different benchmark datasets show that the proposed CoLaDa achieves superior results compared to previous methods, especially when generalizing to distant languages.
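The instance-collaboration idea — scoring a pseudo label by how well it agrees with the labels of its nearest neighbors in the representation space — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the cosine-similarity k-NN, the function name `consistency_scores`, and the use of the score as a denoising weight are all assumptions made for the example.

```python
# Hedged sketch of neighborhood-label-consistency denoising (assumed details,
# not CoLaDa's actual algorithm): each token's pseudo label is scored by the
# fraction of its k nearest neighbors (cosine similarity) sharing that label;
# a low score flags a likely noisy pseudo label.
import numpy as np

def consistency_scores(embeddings: np.ndarray,
                       pseudo_labels: np.ndarray,
                       k: int = 3) -> np.ndarray:
    """For each token, return the fraction of its k most similar tokens
    (excluding itself) whose pseudo label matches its own."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)           # exclude self-similarity
    scores = np.empty(len(pseudo_labels))
    for i, row in enumerate(sims):
        neighbors = np.argsort(row)[-k:]      # indices of k nearest neighbors
        scores[i] = np.mean(pseudo_labels[neighbors] == pseudo_labels[i])
    return scores

# Toy example: two embedding clusters; token 2 sits in the first cluster but
# carries a pseudo label that disagrees with its neighborhood.
emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.95, 0.05],
                [0.0, 1.0], [0.1, 0.9], [0.05, 0.95]])
labels = np.array([0, 0, 1, 1, 1, 1])
weights = consistency_scores(emb, labels, k=2)
# weights[2] is low (inconsistent label), weights[4] is high (consistent).
```

In a denoising pipeline, such scores could down-weight or filter pseudo-labeled tokens before training the next model, which matches the abstract's high-level description of using neighborhood label consistency for denoising.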
URL
https://arxiv.org/abs/2305.14913