Abstract
Neural Machine Translation (NMT) in low-resource settings and for morphologically rich languages is made difficult in part by the data sparsity of vocabulary words. Several methods have been used to help reduce this sparsity, notably Byte-Pair Encoding (BPE) and a character-based CNN layer (charCNN). However, the charCNN has largely been neglected, possibly because it has only been compared to BPE rather than combined with it. We argue for a reconsideration of the charCNN, based on cross-lingual improvements on low-resource data. We translate from 8 languages into English, using a multi-way parallel collection of TED transcripts. We find that in most cases, using both BPE and a charCNN performs best, while for Hebrew, a charCNN over words performs best.
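As a minimal illustration of the BPE technique the abstract refers to (a sketch of Sennrich et al.'s merge-learning loop, not the paper's own code; the function name and toy word frequencies are assumptions for the example), BPE starts from characters and repeatedly merges the most frequent adjacent symbol pair, so frequent subwords like suffixes become single vocabulary units:

```python
from collections import Counter

def learn_bpe(word_freqs, num_merges):
    """Sketch of BPE merge learning: word_freqs maps word -> count.
    Returns the learned merge operations and the final segmented vocab."""
    # Start with each word as a tuple of characters.
    vocab = {tuple(w): f for w, f in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        # Count every adjacent symbol pair, weighted by word frequency.
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent pair
        merges.append(best)
        merged = best[0] + best[1]
        # Rewrite every word, fusing occurrences of the best pair.
        new_vocab = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == best:
                    out.append(merged)
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges, vocab
```

On a toy corpus, the first merges pick out the shared suffix: with `{"low": 5, "lower": 2, "newest": 6, "widest": 3}` and two merges, the algorithm learns `('e','s')` then `('es','t')`, segmenting "newest" as `n e w est`. A charCNN instead keeps each word as a character sequence and learns a convolutional embedding over it, which is why the two techniques can be combined rather than compared.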
URL
https://arxiv.org/abs/1809.01301