Abstract
In the current machine translation (MT) landscape, the Transformer architecture stands out as the gold standard, especially for high-resource language pairs. This research examines its efficacy for low-resource language pairs, specifically English$\leftrightarrow$Irish and English$\leftrightarrow$Marathi. Notably, the study identifies the optimal hyperparameters and subword model type that significantly improve the translation quality of Transformer models for low-resource language pairs. The scarcity of parallel datasets for low-resource languages can hinder MT development. To address this, gaHealth was developed: the first bilingual corpus of health data for the Irish language. Focusing on the health domain, models developed using this in-domain dataset exhibited very significant improvements in BLEU score when compared with models from the LoResMT2021 Shared Task. A subsequent human evaluation using the Multidimensional Quality Metrics (MQM) error taxonomy showed that the Transformer system reduced both accuracy and fluency errors relative to an RNN-based counterpart. Furthermore, this thesis introduces adaptNMT and adaptMLLM, two open-source applications streamlined for the development, fine-tuning, and deployment of neural machine translation models. These tools considerably simplify the setup and evaluation process, making MT more accessible to both developers and translators. adaptNMT, grounded in the OpenNMT ecosystem, promotes eco-friendly natural language processing research by highlighting the environmental footprint of model development. Fine-tuning of multilingual language models (MLLMs) with adaptMLLM demonstrated improved translation performance for two low-resource language pairs, English$\leftrightarrow$Irish and English$\leftrightarrow$Marathi, compared with baselines from the LoResMT2021 Shared Task.
URL
https://arxiv.org/abs/2403.01580