Abstract
This paper demonstrates that Phrase-Based Statistical Machine Translation (PBSMT) can outperform Transformer-based Neural Machine Translation (NMT) in moderate-resource scenarios, specifically for structurally similar language pairs such as Persian–Hindi. Although the Transformer architecture typically requires large parallel corpora, our results show that PBSMT achieves a BLEU score of 66.32, significantly exceeding the Transformer-NMT score of 53.7 on the same dataset. Additionally, we explore variations of the SMT architecture, including training on Romanized text and reordering Persian sentences to match the left-to-right (LTR) structure of Hindi. Our findings highlight the importance of choosing an architecture suited to the characteristics of the language pair and advocate for SMT as a high-performing alternative, even in contexts commonly dominated by NMT.
URL
https://arxiv.org/abs/2412.16877