Abstract
Character-level Neural Machine Translation (NMT) models have recently achieved impressive results on many language pairs. They perform well mainly on Indo-European language pairs, where the languages share the same writing system. However, for translation between Chinese and English, the gap between the two writing systems poses a major challenge because of the lack of systematic correspondence between the individual linguistic units. In this paper, we enable character-level NMT for Chinese by breaking down Chinese characters into linguistic units similar to those of Indo-European languages. We use the Wubi encoding scheme, which preserves the original shape and semantic information of the characters while also being reversible. We show promising results from training Wubi-based models at the character and subword level with recurrent as well as convolutional models.
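The core idea is that each Chinese character maps to a short, unique sequence of Latin keys, so a sentence can be losslessly converted to a Latin-letter string and back. A minimal sketch of such a reversible encoding is below; the per-character codes in the toy table are illustrative placeholders, not the actual Wubi codes, and the separator symbol is an assumption needed to make decoding unambiguous.

```python
# Sketch of a reversible Wubi-style romanization.
# Assumption: the codes below are placeholders, not real Wubi codes.
WUBI_TABLE = {
    "你": "wq",
    "好": "vb",
}
# Inverse table; reversibility requires every code to be unique.
CODE_TABLE = {code: ch for ch, code in WUBI_TABLE.items()}

# Separator appended after each code so the Latin stream can be
# split back into per-character codes during decoding.
SEP = "\u2581"

def encode(text: str) -> str:
    """Replace each known character with its code; pass others through."""
    return "".join(WUBI_TABLE.get(ch, ch) + SEP for ch in text)

def decode(encoded: str) -> str:
    """Invert encode() by splitting on the separator."""
    return "".join(CODE_TABLE.get(tok, tok)
                   for tok in encoded.split(SEP) if tok)
```

With such a mapping, standard character- or subword-level NMT pipelines can operate on the Latin-letter stream, and the output can be decoded back into Chinese characters without loss.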
URL
https://arxiv.org/abs/1805.03330