Abstract
We explore the impact of multi-source input strategies on machine translation (MT) quality, comparing GPT-4o, a large language model (LLM), with a traditional multilingual neural machine translation (NMT) system. Using intermediate-language translations as contextual cues, we evaluate their effectiveness in improving English-to-Portuguese and Chinese-to-Portuguese translation. Results suggest that contextual information significantly improves translation quality for domain-specific datasets and potentially for linguistically distant language pairs, with diminishing returns observed on benchmarks with high linguistic variability. Additionally, we demonstrate that shallow fusion, a multi-source approach we apply within the NMT system, yields improved results when high-resource languages are used as context for other translation pairs, highlighting the importance of strategic context-language selection.
URL
https://arxiv.org/abs/2503.07195