Abstract
Leveraging large language models (LLMs) for machine translation has shown promising results. However, it requires the LLM to handle both the source and target languages of the translation task. When no large model supporting the desired language pair is available, resorting to continual learning becomes costly. To reduce these costs, we propose Relay Decoding (RD), which concatenates two distinct large models that individually support the source and target languages. By adding a simple mapping layer to connect the two models and training it on a limited amount of parallel data, we achieve superior results on the machine translation task. Experiments on the Multi30k and WikiMatrix datasets validate the effectiveness of the proposed method.
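The abstract's pipeline — a source-side model encodes the input, a trainable mapping layer projects its hidden states into the target-side model's representation space, and the target-side model decodes — can be sketched schematically. The code below is a toy illustration only: both "large models" are stand-in stubs, the linear mapping is randomly initialized (in the paper it would be trained on limited parallel data), and all dimensions, vocabularies, and function names are hypothetical, since the abstract does not specify the actual architecture.

```python
# Toy sketch of the Relay Decoding (RD) idea: source model -> mapping layer
# -> target model. All components are hypothetical stubs for illustration.
import random

random.seed(0)

SRC_DIM, TGT_DIM = 4, 3  # hidden sizes of the two (stub) large models


def src_model_encode(tokens):
    """Stub source-language LLM: one hidden vector per input token."""
    return [[random.uniform(-1, 1) for _ in range(SRC_DIM)] for _ in tokens]


# The "simple mapping layer": a single linear projection. Randomly
# initialized here; per the abstract, it is what gets trained on a
# limited amount of parallel data.
W = [[random.uniform(-0.1, 0.1) for _ in range(TGT_DIM)]
     for _ in range(SRC_DIM)]


def mapping_layer(hidden):
    """Project source-model hidden states into the target model's space."""
    return [
        [sum(h[i] * W[i][j] for i in range(SRC_DIM)) for j in range(TGT_DIM)]
        for h in hidden
    ]


def tgt_model_decode(states):
    """Stub target-language LLM: picks a token per mapped state (toy)."""
    vocab = ["the", "dog", "runs"]  # toy target vocabulary, one per dim
    return [vocab[max(range(TGT_DIM), key=lambda j: s[j])] for s in states]


def relay_decode(src_tokens):
    """Relay: encode with model A, map, decode with model B."""
    return tgt_model_decode(mapping_layer(src_model_encode(src_tokens)))


print(relay_decode(["ein", "Hund", "läuft"]))
```

In a real implementation the stubs would be replaced by frozen pretrained LLMs, and only the mapping layer's parameters would be updated during training, which is what keeps the approach cheap relative to continual learning of a single model.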
URL
https://arxiv.org/abs/2405.02933