Paper Reading AI Learner

Syntax-Enhanced Neural Machine Translation with Syntax-Aware Word Representations

2019-05-08 02:56:43
Meishan Zhang, Zhenghua Li, Guohong Fu, Min Zhang

Abstract

Syntax has been demonstrated to be highly effective in neural machine translation (NMT). Previous NMT models integrate syntax by representing the 1-best tree outputs of a well-trained parsing system, e.g., the representative Tree-RNN and Tree-Linearization methods, which may suffer from error propagation. In this work, we propose a novel method to integrate source-side syntax implicitly for NMT. The basic idea is to use the intermediate hidden representations of a well-trained end-to-end dependency parser, which are referred to as syntax-aware word representations (SAWRs). Then, we simply concatenate such SAWRs with ordinary word embeddings to enhance basic NMT models. The method can be straightforwardly integrated into the widely used sequence-to-sequence (Seq2Seq) NMT models. We start with a representative RNN-based Seq2Seq baseline system, and test the effectiveness of our proposed method on two benchmark datasets for the Chinese-English and English-Vietnamese translation tasks, respectively. Experimental results show that the proposed approach brings significant BLEU score improvements on the two datasets compared with the baseline: 1.74 points for Chinese-English translation and 0.80 points for English-Vietnamese translation, respectively. In addition, the approach also outperforms the explicit Tree-RNN and Tree-Linearization methods.
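The core mechanism described above is simple: take the per-token hidden states of a pre-trained dependency parser (the SAWRs) and concatenate them with the ordinary word embeddings before feeding the Seq2Seq encoder. A minimal sketch of that concatenation step, using NumPy with random stand-in vectors and hypothetical dimensions (the actual embedding and parser sizes are not given in this abstract):

```python
import numpy as np

np.random.seed(0)

# Hypothetical dimensions for illustration only: a 5-token source
# sentence, 100-dim word embeddings, 400-dim parser hidden states.
seq_len, emb_dim, sawr_dim = 5, 100, 400

# Ordinary word embeddings for the source sentence.
word_embeddings = np.random.randn(seq_len, emb_dim)

# Syntax-aware word representations (SAWRs): in the paper these are
# intermediate hidden states of a well-trained end-to-end dependency
# parser, one vector per token; here they are random stand-ins.
sawrs = np.random.randn(seq_len, sawr_dim)

# Concatenate along the feature dimension; the result replaces the
# plain embeddings as input to the Seq2Seq encoder.
encoder_input = np.concatenate([word_embeddings, sawrs], axis=-1)

print(encoder_input.shape)  # (5, 500)
```

Because the syntax signal enters only through this extra input channel, no 1-best parse tree is ever committed to, which is what lets the approach sidestep the error propagation that the explicit Tree-RNN and Tree-Linearization methods can suffer from.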

Abstract (translated)

Syntax has been shown to be highly effective in neural machine translation (NMT). Previous NMT models integrate syntax by representing the 1-best tree outputs of a well-trained parsing system, e.g., the representative Tree-RNN and Tree-Linearization methods, which may suffer from error propagation. In this paper, we propose a novel method to implicitly integrate source-side syntax for NMT. The basic idea is to use the intermediate hidden representations of a well-trained end-to-end dependency parser, referred to as syntax-aware word representations (SAWRs). We then simply concatenate these SAWRs with ordinary word embeddings to enhance basic NMT models. The method can be directly integrated into widely used sequence-to-sequence (Seq2Seq) NMT models. Starting from a representative RNN-based Seq2Seq baseline system, we test the effectiveness of the proposed method on two benchmark datasets for the Chinese-English and English-Vietnamese translation tasks, respectively. Experimental results show that the approach significantly improves BLEU scores on both datasets, by 1.74 points for Chinese-English translation and 0.80 points for English-Vietnamese translation. In addition, the approach also outperforms the explicit Tree-RNN and Tree-Linearization methods.

URL

https://arxiv.org/abs/1905.02878

PDF

https://arxiv.org/pdf/1905.02878.pdf

