Abstract
Neural Machine Translation (NMT) systems rely on large amounts of parallel data, which is a major challenge for low-resource languages. Building on recent work on unsupervised and semi-supervised methods, we present an approach that combines zero-shot and dual learning. The latter uses reinforcement learning to exploit the duality of the machine translation task, and requires only monolingual data for the target language pair. Experiments show that a zero-shot dual system, trained on English-French and English-Spanish, outperforms a standard NMT system by large margins on zero-shot Spanish-French translation (in both directions), and comes within 2.2 BLEU points of a comparable supervised setting. Our method also yields improvements in the setting where a small amount of parallel data for the zero-shot language pair is available. When we add Russian, extending the experiments to jointly model 6 zero-shot translation directions, all directions improve by between 4 and 15 BLEU points, again reaching performance near that of the supervised setting.
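The dual-learning signal mentioned above can be illustrated with a toy sketch: a candidate translation from the forward model is scored by (i) a target-side language model and (ii) how well the backward model can reconstruct the source, and this combined reward would drive a policy-gradient update in the full method. The lookup-table "models", vocabularies, and the `alpha` weight below are hypothetical stand-ins for the paper's neural models, purely for illustration.

```python
import math

# Hypothetical toy "models": word-for-word lookup tables standing in
# for the forward (en->fr) and backward (fr->en) NMT models.
EN2FR = {"the": "le", "cat": "chat", "sleeps": "dort"}
FR2EN = {v: k for k, v in EN2FR.items()}

def forward_translate(sentence):
    """Forward model P(f|e): translate each English word, <unk> if unknown."""
    return [EN2FR.get(w, "<unk>") for w in sentence]

def backward_logprob(src, mid):
    """Reconstruction score from the backward model, log P(e|f):
    reward each intermediate word that maps back to the source word."""
    score = 0.0
    for e, f in zip(src, mid):
        score += math.log(0.9) if FR2EN.get(f) == e else math.log(0.1)
    return score

def lm_logprob(mid):
    """Toy target-side language model: penalize <unk> tokens."""
    return sum(math.log(0.05) if w == "<unk>" else math.log(0.5) for w in mid)

def dual_reward(src, alpha=0.5):
    """Dual-learning reward: alpha * LM fluency + (1 - alpha) * reconstruction.
    In the real method this reward is fed to a policy-gradient update
    of the forward model; only monolingual data is needed."""
    mid = forward_translate(src)
    return alpha * lm_logprob(mid) + (1 - alpha) * backward_logprob(src, mid)

good = dual_reward(["the", "cat", "sleeps"])
bad = dual_reward(["the", "dog", "sleeps"])  # "dog" is out of vocabulary
print(good > bad)  # fluent, reconstructible translations score higher
```

The key design point is that neither scoring term requires parallel data for the translation direction being trained: the language model is learned from monolingual target text, and the reconstruction check reuses the reverse-direction model.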
URL
https://arxiv.org/abs/1805.10338