Abstract
We study the problem of learning a named entity recognition (NER) model using noisy labels from multiple weak supervision sources. Though cheaper than human annotators, weak sources usually yield incomplete, inaccurate, or contradictory predictions. To address such challenges, we propose a conditional hidden Markov model (CHMM). It inherits the hidden Markov model's ability to aggregate labels from weak sources through unsupervised learning. However, CHMM enhances the hidden Markov model's flexibility and context-representation capability by predicting token-wise transition and emission probabilities from the BERT embeddings of the input tokens. In addition, we refine CHMM's prediction with an alternate-training approach (CHMM-AlT). It fine-tunes a BERT-based NER model with the labels inferred by CHMM, and this BERT-NER's output is regarded as an additional weak source to train the CHMM in return. Evaluation on four datasets from various domains shows that our method outperforms the weakly supervised baselines by a wide margin.
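To make the core idea concrete, below is a minimal sketch (not the authors' implementation) of a conditional HMM in PyTorch: per-token transition and emission matrices are predicted from BERT embeddings, and the standard forward algorithm computes the sequence log-likelihood. For simplicity it assumes a single observation sequence per sentence, whereas the paper aggregates observations from multiple weak sources; all class and variable names here are hypothetical.

```python
# Sketch of a conditional HMM: transition/emission probabilities are
# token-dependent, predicted from contextual (e.g., BERT) embeddings.
import torch
import torch.nn as nn


class ConditionalHMM(nn.Module):
    def __init__(self, hidden_dim: int, n_labels: int, n_obs: int):
        super().__init__()
        self.n_labels = n_labels
        # Maps each token embedding to a label-to-label transition matrix.
        self.transition_head = nn.Linear(hidden_dim, n_labels * n_labels)
        # Maps each token embedding to an emission matrix over observations.
        self.emission_head = nn.Linear(hidden_dim, n_labels * n_obs)
        # Initial label distribution (learned, in log space after softmax).
        self.prior_logits = nn.Parameter(torch.zeros(n_labels))

    def forward(self, embs: torch.Tensor, obs: torch.Tensor) -> torch.Tensor:
        """embs: (T, hidden_dim) contextual embeddings for one sentence.
        obs:  (T,) observed weak-label indices.
        Returns the log-likelihood via the HMM forward recursion."""
        T = embs.size(0)
        # Token-wise transitions: (T, n_labels, n_labels), normalized per row.
        log_trans = torch.log_softmax(
            self.transition_head(embs).view(T, self.n_labels, self.n_labels),
            dim=-1)
        # Token-wise emissions: (T, n_labels, n_obs).
        log_emit = torch.log_softmax(
            self.emission_head(embs).view(T, self.n_labels, -1), dim=-1)
        # Forward algorithm in log space.
        alpha = torch.log_softmax(self.prior_logits, 0) + log_emit[0, :, obs[0]]
        for t in range(1, T):
            alpha = torch.logsumexp(alpha.unsqueeze(1) + log_trans[t], dim=0) \
                    + log_emit[t, :, obs[t]]
        return torch.logsumexp(alpha, dim=0)
```

Training would maximize this log-likelihood over unlabeled sentences, after which posterior label marginals (via the forward-backward recursions) serve as the aggregated labels; the alternate-training step described above would then fine-tune a BERT-NER model on those labels and feed its predictions back as an extra weak source.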
URL
https://arxiv.org/abs/2105.12848