Abstract
This paper introduces AFRIDOC-MT, a document-level multi-parallel translation dataset covering English and five African languages: Amharic, Hausa, Swahili, Yorùbá, and Zulu. The dataset comprises 334 health and 271 information technology news documents, all human-translated from English into these languages. We conduct document-level translation benchmark experiments by evaluating neural machine translation (NMT) models and large language models (LLMs) on translation between English and these languages, at both the sentence and pseudo-document levels. The model outputs are then realigned into complete documents for evaluation. Our results indicate that NLLB-200 achieves the best average performance among the standard NMT models, while GPT-4o outperforms the other general-purpose LLMs. Fine-tuning selected models leads to substantial performance gains, but models trained on sentences struggle to generalize effectively to longer documents. Furthermore, our analysis reveals that some LLMs exhibit issues such as under-generation, repetition of words or phrases, and off-target translations, especially for African languages.
URL
https://arxiv.org/abs/2501.06374