Abstract
Distantly-labeled data can be used to scale up training of statistical models, but it is typically noisy and that noise can vary with the distant labeling technique. In this work, we propose a two-stage procedure for handling this type of data: denoise it with a learned model, then train our final model on clean and denoised distant data with standard supervised training. Our denoising approach consists of two parts. First, a filtering function discards examples from the distantly labeled data that are wholly unusable. Second, a relabeling function repairs noisy labels for the retained examples. Each of these components is a model trained on synthetically-noised examples generated from a small manually-labeled set. We investigate this approach on the ultra-fine entity typing task of Choi et al. (2018). Our baseline model is an extension of their model with pre-trained ELMo representations, which already achieves state-of-the-art performance. Adding distant data that has been denoised with our learned models gives further performance gains over this base model, outperforming models trained on raw distant data or heuristically-denoised distant data.
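The two-stage denoising procedure described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `keep_score` and `relabel` are hypothetical stand-ins for the learned filtering and relabeling models, which the paper trains on synthetically-noised examples generated from a small manually-labeled set.

```python
def denoise(distant_data, keep_score, relabel, threshold=0.5):
    """Stage 1: discard examples the filter scores below `threshold`
    (wholly unusable). Stage 2: repair the noisy label sets of the
    retained examples with the relabeling model."""
    denoised = []
    for mention, noisy_labels in distant_data:
        if keep_score(mention, noisy_labels) < threshold:
            continue  # stage 1: filter out unusable examples
        denoised.append((mention, relabel(mention, noisy_labels)))  # stage 2
    return denoised

# Toy stand-ins for the learned models: keep any example that has at
# least one label, and strip a hypothetical "junk" label to simulate
# label repair.
keep_score = lambda mention, labels: 1.0 if labels else 0.0
relabel = lambda mention, labels: [l for l in labels if l != "junk"]

data = [("Obama", ["person", "junk"]), ("", [])]
print(denoise(data, keep_score, relabel))  # → [('Obama', ['person'])]
```

The denoised output would then be pooled with the clean manually-labeled data for standard supervised training of the final typing model.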
URL
https://arxiv.org/abs/1905.01566