Abstract
Using pre-trained word embeddings as the input layer is common practice in many natural language processing (NLP) tasks, but it has been largely neglected in neural machine translation (NMT). In this paper, we conducted a systematic analysis of the effect of using pre-trained source-side monolingual word embeddings in NMT. We compared several strategies, such as fixing or updating the embeddings during NMT training on varying amounts of data, and we also proposed a novel strategy, called dual-embedding, that blends the fixing and updating strategies. Our results suggest that pre-trained embeddings can be helpful if properly incorporated into NMT, especially when parallel data is limited or additional in-domain monolingual data is readily available.
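The abstract does not specify how dual-embedding blends the fixed and updated strategies; a minimal sketch of one plausible interpretation, where each token is represented by the concatenation of a frozen pre-trained vector and a trainable vector, might look like this (all names and dimensions are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Hypothetical sketch of a "dual-embedding" layer: each token id indexes two
# tables, one frozen (pre-trained) and one trainable, and the two vectors are
# concatenated. Dimensions and initialization are illustrative only.
rng = np.random.default_rng(0)
vocab_size, dim = 5, 4

pretrained = rng.normal(size=(vocab_size, dim))  # fixed: never updated during training
trainable = np.zeros((vocab_size, dim))          # updated by NMT gradients

def dual_embed(token_ids):
    """Look up both tables and concatenate along the feature axis."""
    return np.concatenate([pretrained[token_ids], trainable[token_ids]], axis=-1)

vecs = dual_embed(np.array([0, 3]))
print(vecs.shape)  # (2, 8): each token gets a 2*dim-dimensional representation
```

Under this reading, the frozen half preserves the monolingual distributional information while the trainable half is free to specialize to the translation task.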
URL
https://arxiv.org/abs/1806.01515