Abstract
Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers deeply understand visual objects and their relations to natural language. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance when translating rare words. In NMT, pretrained word embeddings have been shown to improve translation in low-resource domains, and a search-based approach has been proposed to address the rare-word problem. In this study, we effectively combine these two approaches in the context of multimodal NMT and explore how to take full advantage of pretrained word embeddings to better translate rare words. We report overall performance improvements of 1.24 METEOR and 2.49 BLEU, and achieve an improvement of 7.67 F-score on rare word translation.
URL
https://arxiv.org/abs/1904.00639