Abstract
Proto-form reconstruction has long been a painstaking process for linguists. Recently, computational models such as RNNs and Transformers have been proposed to automate it. We take three different approaches to improving upon previous methods: data augmentation to recover missing reflexes, adding a VAE structure to the Transformer model for proto-to-language prediction, and using a neural machine translation model for the reconstruction task. We find that with the additional VAE structure, the Transformer model performs better on the WikiHan dataset, and that the data augmentation step stabilizes training.
URL
https://arxiv.org/abs/2404.15690