Abstract
In this paper, we investigate whether multilingual neural translation models learn a stronger semantic abstraction of sentences than bilingual ones. We test this hypothesis by measuring the perplexity of such models when applied to paraphrases of the source language. The intuition is that an encoder produces better representations if a decoder is capable of recognizing synonymous sentences in the same language, even though the model is never trained for that task. In our setup, we add 16 different auxiliary languages to a bidirectional bilingual baseline model (English-French) and test it with in-domain and out-of-domain paraphrases in English. The results show that the perplexity is significantly reduced in each of the cases, indicating that meaning can be grounded in translation. This is further supported by a study on paraphrase generation that we also include at the end of the paper.
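The evaluation metric underlying this setup is standard corpus perplexity: the exponential of the negative mean token log-probability the decoder assigns to a paraphrase. The following minimal sketch (not the authors' code; the probability values are made up for illustration) shows how that quantity is computed:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the negative mean per-token log-probability."""
    n = len(token_logprobs)
    return math.exp(-sum(token_logprobs) / n)

# Hypothetical per-token probabilities a trained decoder might assign to
# the tokens of an English paraphrase of the source sentence.
paraphrase_lp = [math.log(p) for p in [0.4, 0.5, 0.3, 0.6]]
print(round(perplexity(paraphrase_lp), 3))
```

A lower perplexity on paraphrases would then indicate, as the paper argues, that the encoder's representation captures sentence meaning rather than surface form.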
URL
https://arxiv.org/abs/1808.06826