Abstract
Multilingual pretrained language models (mPLMs) have shown their effectiveness in multilingual word alignment induction. However, these methods usually start from mBERT or XLM-R. In this paper, we investigate whether the multilingual sentence Transformer LaBSE is a strong multilingual word aligner. This idea is non-trivial, as LaBSE is trained to learn language-agnostic sentence-level embeddings, while the alignment extraction task requires the more fine-grained word-level embeddings to be language-agnostic. We demonstrate that vanilla LaBSE outperforms the other mPLMs currently used for the alignment task, and then propose finetuning LaBSE on parallel corpora for further improvement. Experimental results on seven language pairs show that our best aligner outperforms previous state-of-the-art models of all varieties. In addition, our aligner supports different language pairs in a single model, and even achieves a new state of the art on zero-shot language pairs that do not appear in the finetuning process.
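To make the alignment-extraction setting concrete, the following is a minimal sketch in Python. It assumes the Hugging Face checkpoint sentence-transformers/LaBSE, takes last-layer token embeddings, and applies a SimAlign-style bidirectional argmax over the cosine-similarity matrix; this is a common extraction heuristic and not necessarily the exact procedure used in the paper.

import torch
from transformers import AutoModel, AutoTokenizer

# Assumed checkpoint id for LaBSE on the Hugging Face hub.
MODEL_NAME = "sentence-transformers/LaBSE"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def token_embeddings(sentence):
    # Sub-word tokens and their last-layer hidden states, with [CLS]/[SEP] dropped.
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]           # (seq_len, dim)
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
    return tokens[1:-1], hidden[1:-1]

def extract_alignment(src, tgt):
    # Bidirectional argmax over the cosine-similarity matrix of token embeddings.
    src_tok, src_emb = token_embeddings(src)
    tgt_tok, tgt_emb = token_embeddings(tgt)
    sim = torch.nn.functional.normalize(src_emb, dim=-1) @ \
          torch.nn.functional.normalize(tgt_emb, dim=-1).T   # (|src|, |tgt|)
    fwd = sim.argmax(dim=1).tolist()                          # best target per source token
    bwd = sim.argmax(dim=0).tolist()                          # best source per target token
    # Keep only mutually best-matching token pairs.
    return [(src_tok[i], tgt_tok[j]) for i, j in enumerate(fwd) if bwd[j] == i]

print(extract_alignment("The cat sleeps .", "Die Katze schläft ."))

The finetuning on parallel corpora proposed in the paper would further adjust these token-level representations; that step is not shown in this sketch.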
URL
https://arxiv.org/abs/2301.12140