Abstract
Large language models (LLMs) with billions of parameters, pretrained on massive amounts of data, are now capable of performance near or better than the state of the art on a variety of downstream natural language processing tasks. Neural machine translation (NMT) is one such task to which LLMs have been applied with great success. However, little research has focused on applying LLMs to the more difficult subset of NMT called simultaneous translation (SimulMT), where translation begins before the entire source context is available to the model. In this paper, we address key challenges facing LLMs fine-tuned for SimulMT, validate classical SimulMT concepts and practices in the context of LLMs, explore adapting LLMs that are fine-tuned for NMT to the task of SimulMT, and introduce Simul-LLM, the first open-source fine-tuning and evaluation pipeline development framework for LLMs focused on SimulMT.
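The "classical SimulMT concepts and practices" the abstract refers to include fixed read/write policies such as wait-k, in which the model begins emitting target tokens after reading only k source tokens rather than waiting for the full sentence. The sketch below illustrates that idea only; it is not the Simul-LLM API, and `toy_translate` is a hypothetical stand-in for a call to the underlying translation model:

```python
# Minimal sketch of a wait-k simultaneous decoding loop, a classical
# SimulMT policy. toy_translate is a hypothetical stand-in for the LLM,
# not part of the Simul-LLM framework.

from typing import List

def toy_translate(source_prefix: List[str], target_prefix: List[str]) -> str:
    """Hypothetical model call: produce the next target token given the
    source read so far and the target emitted so far. Here it simply
    echoes the source token aligned with the current target position."""
    return source_prefix[len(target_prefix)]

def wait_k_decode(source_stream: List[str], k: int = 3) -> List[str]:
    """Wait-k policy: READ until k source tokens ahead of what has been
    written, then WRITE one token; repeat. Translation therefore begins
    before the entire source context is available to the model."""
    read, target = 0, []
    while len(target) < len(source_stream):
        # READ: consume source tokens until we are k ahead of the output,
        # or the source stream is exhausted.
        while read < min(len(target) + k, len(source_stream)):
            read += 1
        # WRITE: emit one target token conditioned only on the prefix read.
        target.append(toy_translate(source_stream[:read], target))
    return target

if __name__ == "__main__":
    print(wait_k_decode(["the", "cat", "sat", "on", "the", "mat"], k=2))
```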
Abstract (translated)
Large language models (LLMs), with billions of parameters and pretrained on massive amounts of data, can now achieve performance near or better than the state of the art on a variety of downstream natural language processing tasks. Applying LLMs to neural machine translation (NMT) has been highly successful. However, little research has focused on applying LLMs to the more difficult subtask of NMT known as simultaneous translation (SimulMT), in which translation begins before the entire source context is available to the model. In this paper, we address the key challenges of fine-tuning LLMs for SimulMT, validate classical SimulMT concepts and practices in the context of LLMs, explore adapting LLMs fine-tuned for NMT to the SimulMT task, and introduce Simul-LLM, the first open-source fine-tuning and evaluation framework for LLMs focused on SimulMT.
URL
https://arxiv.org/abs/2312.04691