Abstract
Syntax has been demonstrated to be highly effective in neural machine translation (NMT). Previous NMT models integrate syntax by encoding the 1-best tree output of a well-trained parsing system, e.g., via the representative Tree-RNN and Tree-Linearization methods, and may therefore suffer from error propagation. In this work, we propose a novel method that integrates source-side syntax implicitly for NMT. The basic idea is to use the intermediate hidden representations of a well-trained end-to-end dependency parser, which we refer to as syntax-aware word representations (SAWRs). We then simply concatenate these SAWRs with ordinary word embeddings to enhance basic NMT models. The method can be straightforwardly integrated into the widely-used sequence-to-sequence (Seq2Seq) NMT models. We start with a representative RNN-based Seq2Seq baseline system and test the effectiveness of the proposed method on two benchmark datasets, for the Chinese-English and English-Vietnamese translation tasks, respectively. Experimental results show that the proposed approach brings significant BLEU score improvements over the baseline on both datasets: 1.74 points for Chinese-English translation and 0.80 points for English-Vietnamese translation. In addition, the approach also outperforms the explicit Tree-RNN and Tree-Linearization methods.
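The core operation described in the abstract, concatenating parser-derived SAWRs with ordinary word embeddings before they enter the Seq2Seq encoder, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the dimensions, variable names, and use of random matrices in place of a real parser and embedding table are all assumptions made for the example.

```python
import numpy as np

# Hypothetical dimensions for illustration only (not taken from the paper).
sent_len, emb_dim, sawr_dim = 5, 4, 3

rng = np.random.default_rng(0)
# Stand-ins: ordinary word embeddings for a source sentence, and the
# intermediate hidden states of a pre-trained dependency parser (SAWRs).
word_embeddings = rng.normal(size=(sent_len, emb_dim))
sawrs = rng.normal(size=(sent_len, sawr_dim))

# Per-token concatenation along the feature axis; the result replaces the
# plain embeddings as input to the Seq2Seq encoder.
encoder_inputs = np.concatenate([word_embeddings, sawrs], axis=-1)

print(encoder_inputs.shape)  # (5, 7)
```

Because the syntax enters only through these hidden vectors rather than a discrete 1-best tree, no single parse decision is committed to, which is the sense in which the integration is "implicit".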
URL
https://arxiv.org/abs/1905.02878