Abstract
Simultaneous machine translation begins to translate each source sentence before the source speaker is finished speaking, with applications to live and streaming scenarios. Simultaneous systems must carefully schedule their reading of the source sentence to balance quality against latency. We present the first simultaneous translation system to learn an adaptive schedule jointly with a neural machine translation (NMT) model that attends over all source tokens read thus far. We do so by introducing Monotonic Infinite Lookback (MILk) attention, which maintains both a hard, monotonic attention head to schedule the reading of the source sentence, and a soft attention head that extends from the monotonic head back to the beginning of the source. We show that MILk's adaptive schedule allows it to arrive at latency-quality trade-offs that are favorable to those of a recently proposed wait-k strategy for many latency values.
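The two-headed mechanism described above can be sketched at inference time: a hard monotonic head scans forward to decide how far to read, and a soft head attends over everything read so far. This is a minimal, hypothetical NumPy sketch, not the authors' implementation; the greedy 0.5 threshold for the hard head's selection probabilities and all variable names are illustrative assumptions.

```python
import numpy as np

def milk_attention_infer(p_choose, soft_energy):
    """Simplified MILk-style decoding sketch (hypothetical, not the paper's code).

    p_choose:    (T_out, T_in) hard-head selection probabilities p[i, j]
    soft_energy: (T_out, T_in) soft-head attention energies u[i, j]
    Returns, per output step i, the monotonic head position t_i and the
    soft-attention weights over the prefix of source tokens 0..t_i.
    """
    T_out, T_in = p_choose.shape
    heads, contexts = [], []
    t = 0  # the monotonic head never moves backward across output steps
    for i in range(T_out):
        # Hard head: keep reading while the selection probability is low
        # (a greedy test-time rule standing in for sampling during training).
        while t < T_in - 1 and p_choose[i, t] < 0.5:
            t += 1
        heads.append(t)
        # Soft head: softmax over all source tokens read so far (0..t),
        # i.e. "infinite lookback" from the monotonic head to the start.
        e = soft_energy[i, : t + 1]
        a = np.exp(e - e.max())
        contexts.append(a / a.sum())
    return heads, contexts
```

Because the hard head's position carries over between output steps, the schedule it induces is monotone nondecreasing, while the soft head's weights always span the full prefix already read.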
URL
https://arxiv.org/abs/1906.05218