Abstract
Polyphone disambiguation is the most crucial task in Mandarin grapheme-to-phoneme (g2p) conversion. Previous studies have approached this problem using pre-trained language models, restricted output, and extra information from Part-Of-Speech (POS) tagging. Inspired by these strategies, we propose a novel approach, called g2pW, which adapts learnable softmax weights to condition the outputs of BERT on the polyphonic character of interest and its POS tag. Rather than using a hard mask as in previous works, our experiments show that learning a soft-weighting function over the candidate phonemes benefits performance. Moreover, g2pW does not require an extra pre-trained POS tagging model to use POS tags as auxiliary features, since we train the POS tagger simultaneously with the unified encoder. The experiments show that our g2pW outperforms existing methods on the public dataset. All code, model weights, and a user-friendly package are publicly available.
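To make the hard-mask vs. soft-weighting contrast concrete, here is a minimal, hypothetical sketch (not the paper's actual implementation) in numpy: a hard mask forces the probability of non-candidate phonemes to exactly zero before the softmax, while soft weighting rescales the logits with learned per-phoneme weights, so unlikely phonemes are down-weighted rather than excluded outright. The logits, mask, and weight values below are made up for illustration.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of logits.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def hard_mask_probs(logits, candidate_mask):
    # Hard mask: set non-candidate logits to -inf, so their
    # softmax probability is exactly zero.
    masked = np.where(candidate_mask, logits, -np.inf)
    return softmax(masked)

def soft_weight_probs(logits, weights):
    # Soft weighting (sketch): scale logits by learned weights
    # (here fixed for illustration), so non-candidates are
    # down-weighted but still receive some probability mass.
    return softmax(logits * weights)

# Toy example: 4 phonemes, 2 of which are valid candidates.
logits = np.array([2.0, 1.0, 0.5, -1.0])
candidate_mask = np.array([True, False, True, False])
weights = np.array([1.0, 0.2, 1.0, 0.2])  # hypothetical learned weights

p_hard = hard_mask_probs(logits, candidate_mask)
p_soft = soft_weight_probs(logits, weights)
```

Here `p_hard` assigns zero probability to the masked phonemes, while `p_soft` keeps a small, non-zero probability on them; in g2pW the analogous weights are produced by a learned function of the BERT output, the polyphonic character, and its POS tag rather than being fixed constants.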
URL
https://arxiv.org/abs/2203.10430