Abstract
Word embedding is central to neural machine translation (NMT), which has attracted intensive research interest in recent years. In NMT, the source embedding plays the role of the entrance while the target embedding acts as the terminal. These layers occupy most of the model parameters for representation learning. Furthermore, they only interact indirectly, via a soft-attention mechanism, which makes them comparatively isolated. In this paper, we propose shared-private bilingual word embeddings, which give a closer relationship between the source and target embeddings while also reducing the number of model parameters. For similar source and target words, their embeddings tend to share part of their features, and the two embedding layers cooperatively learn these common representation units. Experiments on 5 language pairs, belonging to 6 different language families and written in 5 different alphabets, demonstrate that the proposed model provides a significant performance boost over strong baselines with dramatically fewer model parameters.
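To make the shared-private idea concrete, below is a minimal PyTorch-style sketch, not the paper's exact parameterization: each word vector is the concatenation of a shared part, drawn from a table tied across the two languages, and a private, language-specific part. The class name `SharedPrivateEmbedding`, the dimensions, and the `shared_index` mapping (which words map to which shared row) are illustrative assumptions; here the mapping stands in for whatever word-similarity criterion decides which source/target words share features.

```python
import torch
import torch.nn as nn


class SharedPrivateEmbedding(nn.Module):
    """Hypothetical sketch of one language's embedding layer: each word
    vector concatenates a shared part (a row in a table tied across both
    languages) with a private, language-specific part."""

    def __init__(self, vocab_size: int, private_dim: int,
                 shared_table: nn.Embedding, shared_index: torch.Tensor):
        super().__init__()
        # shared_index[i] is the row of `shared_table` assigned to word i;
        # similar source/target words map to the same row, so both embedding
        # layers cooperatively train the same common representation units.
        self.register_buffer("shared_index", shared_index)
        self.shared = shared_table                             # tied across languages
        self.private = nn.Embedding(vocab_size, private_dim)   # per-language

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        return torch.cat(
            [self.shared(self.shared_index[ids]), self.private(ids)], dim=-1
        )


# Usage sketch: one shared table serves both embedding layers, so the total
# parameter count is smaller than two full, independent embedding matrices.
SHARED_ROWS, SHARED_DIM, PRIVATE_DIM = 1000, 256, 256
shared_table = nn.Embedding(SHARED_ROWS, SHARED_DIM)
src_to_shared = torch.randint(SHARED_ROWS, (32000,))  # stand-in word-to-row map
tgt_to_shared = torch.randint(SHARED_ROWS, (30000,))
src_embed = SharedPrivateEmbedding(32000, PRIVATE_DIM, shared_table, src_to_shared)
tgt_embed = SharedPrivateEmbedding(30000, PRIVATE_DIM, shared_table, tgt_to_shared)
vectors = src_embed(torch.tensor([[1, 5, 7]]))  # shape: (1, 3, 512)
```

In this sketch the parameter saving comes from replacing part of each full embedding matrix with rows of the single shared table, and the "closer relationship" comes from gradients of both languages flowing into those shared rows.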
URL
https://arxiv.org/abs/1906.03100