Abstract
We present UDify, a multilingual multi-task model capable of accurately predicting universal part-of-speech, morphological features, lemmas, and dependency trees simultaneously for all 124 Universal Dependencies treebanks across 75 languages. By leveraging a multilingual BERT self-attention model pretrained on 104 languages, we find that fine-tuning it on the concatenation of all datasets, with a simple softmax classifier for each UD task, can result in state-of-the-art UPOS, UFeats, Lemmas, UAS, and LAS scores, without requiring any recurrent or language-specific components. We evaluate UDify for multilingual learning, showing that low-resource languages benefit the most from cross-linguistic annotations. We also evaluate for zero-shot learning, with results suggesting that multilingual training provides strong UD predictions even for languages on which neither UDify nor BERT has ever been trained. Code for UDify is available at https://github.com/hyperparticle/udify.
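To make the shared-encoder, per-task softmax-head design concrete, here is a minimal PyTorch sketch. The class name, label-set sizes, and the treatment of lemmatization as edit-script classification are illustrative assumptions, not UDify's exact configuration (the paper's model additionally uses layer-wise attention over BERT's hidden states and a graph-based biaffine parser for the dependency task):

```python
# Minimal sketch: one multilingual BERT encoder shared across all treebanks,
# with a simple linear softmax classifier per UD task. Hypothetical label
# counts; UD defines 17 UPOS tags, the UFeats/lemma sizes are placeholders.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultiTaskUDTagger(nn.Module):
    def __init__(self, n_upos=17, n_ufeats=500, n_lemma_rules=1000):
        super().__init__()
        # Single multilingual encoder shared by every task and language.
        self.encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")
        hidden = self.encoder.config.hidden_size
        # One linear classifier per UD task; no recurrent or
        # language-specific components.
        self.heads = nn.ModuleDict({
            "upos": nn.Linear(hidden, n_upos),
            "ufeats": nn.Linear(hidden, n_ufeats),
            "lemma": nn.Linear(hidden, n_lemma_rules),  # lemmas as edit scripts
        })

    def forward(self, input_ids, attention_mask):
        states = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        # Per-token logits for each task; during training a cross-entropy
        # loss is computed per head and the task losses are summed.
        return {task: head(states) for task, head in self.heads.items()}

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
batch = tokenizer(["Der Hund bellt ."], return_tensors="pt")
logits = MultiTaskUDTagger()(batch["input_ids"], batch["attention_mask"])
print({task: t.shape for task, t in logits.items()})
```

Because every treebank passes through the same encoder and heads, fine-tuning on the concatenated data lets annotations from high-resource languages transfer to low-resource ones, which is the multilingual effect the abstract reports.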
URL
https://arxiv.org/abs/1904.02099