Abstract
Neural machine translation models assume that syntactic knowledge can be learned automatically from the bilingual corpus via the attention network. However, an attention network trained under such weak supervision fails to capture the deep structure of a sentence. It is therefore natural to introduce external syntactic knowledge to guide the learning of the attention network. To this end, we propose a novel, parameter-free, dependency-scaled self-attention network, which integrates explicit syntactic dependencies into the attention network to dispel the dispersion of the attention distribution. Finally, we propose two knowledge-sparsing techniques to prevent the model from overfitting noisy syntactic dependencies. Experiments and extensive analyses on the IWSLT14 German-to-English and WMT16 German-to-English translation tasks validate the effectiveness of our approach.
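The core idea described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name, the shape of the dependency matrix `dep`, and the specific choice of multiplying the softmax weights by `dep` and renormalizing are all assumptions made here for illustration. Note that `dep` is derived from a parse rather than learned, which keeps the mechanism parameter-free.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dependency_scaled_attention(Q, K, V, dep):
    """One hypothetical form of dependency-scaled self-attention.

    Q, K, V : (seq_len, d_k) query/key/value matrices.
    dep     : (seq_len, seq_len) non-negative matrix encoding syntactic
              dependency proximity (illustrative; derived from a parser,
              so no extra learned parameters are introduced).
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # standard scaled dot-product
    weights = softmax(scores) * dep            # scale by syntactic dependencies
    weights = weights / (weights.sum(-1, keepdims=True) + 1e-9)  # renormalize
    return weights @ V, weights

# Toy example: tokens closer in the (hypothetical) parse get larger mass.
rng = np.random.default_rng(0)
n, d = 5, 8
Q, K, V = rng.normal(size=(3, n, d))
dep = np.exp(-np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]))
out, w = dependency_scaled_attention(Q, K, V, dep)
```

The rescaling concentrates each token's attention on its syntactic neighbours, which is one plausible way to "dispel the dispersion" of the attention distribution mentioned in the abstract.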
URL
https://arxiv.org/abs/2111.11707