Abstract
Large language models (LLMs) have shown surprisingly good performance in multilingual neural machine translation (MNMT), even when trained without parallel data. Yet, despite the enormous scale of their training data, they still struggle to translate rare words, particularly for low-resource languages. Worse, it is usually unrealistic to retrieve relevant demonstrations for in-context learning in low-resource languages, which restricts the practical use of LLMs for translation. How should we mitigate this problem? To this end, we present a novel method, CoD, which augments LLMs with prior knowledge by providing chains of multilingual dictionary entries for a subset of input words, eliciting the translation abilities of LLMs. Extensive experiments indicate that augmenting ChatGPT with CoD yields large gains of up to 13x in ChrF++ points for MNMT (from 3.08 to 42.63 for English to Serbian written in Cyrillic script) on the full FLORES-200 devtest set. We further demonstrate the importance of chaining the multilingual dictionaries, as well as the superiority of CoD over few-shot demonstrations for low-resource languages.
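To make the idea concrete, below is a minimal, hypothetical sketch of how chained multilingual dictionary hints for selected input words might be prepended to a translation prompt. The prompt template, the `build_cod_prompt` helper, and the example dictionary entries are illustrative assumptions, not the paper's exact format.

```python
# Hypothetical sketch of Chain-of-Dictionary (CoD)-style prompting.
# The template wording and dictionary source are assumptions for illustration.

def build_cod_prompt(source_text, source_lang, target_lang, chained_dict):
    """Prepend chained multilingual dictionary hints for selected input words.

    chained_dict maps a (presumably rare) source word to its translations
    in a chain of languages, e.g.
    {"ambulance": {"Serbian": "хитна помоћ", "German": "Krankenwagen"}}
    """
    hints = []
    for word, chain in chained_dict.items():
        # Chain the word through several languages, e.g.
        # "ambulance" means "хитна помоћ" means "Krankenwagen".
        links = [f'"{word}"'] + [f'"{t}"' for t in chain.values()]
        hints.append(" means ".join(links) + ".")
    hint_block = "\n".join(hints)
    return (
        f"{hint_block}\n\n"
        f"Translate the following text from {source_lang} into {target_lang}:\n"
        f"{source_text}"
    )

# Usage: hints for one rare word, chained through Serbian, German, and French.
prompt = build_cod_prompt(
    "The ambulance arrived quickly.",
    "English",
    "Serbian (Cyrillic script)",
    {"ambulance": {"Serbian": "хитна помоћ",
                   "German": "Krankenwagen",
                   "French": "ambulance"}},
)
print(prompt)
```

The resulting prompt would then be sent to an LLM such as ChatGPT in place of a plain translation instruction; the intuition is that the dictionary chain grounds rare words across multiple languages the model knows better.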
URL
https://arxiv.org/abs/2305.06575