Abstract
Streaming Automatic Speech Recognition (ASR) in voice assistants can utilize prefetching to partially hide the latency of response generation. Prefetching involves passing a preliminary ASR hypothesis to downstream systems in order to prefetch and cache a response. If the final ASR hypothesis after endpoint detection matches the preliminary one, the cached response can be delivered to the user, thus saving latency. In this paper, we extend this idea by introducing predictive automatic speech recognition, where we predict the full utterance from a partially observed utterance, and prefetch the response based on the predicted utterance. We introduce two personalization approaches and investigate the tradeoff between potential latency gains from successful predictions and the cost increase from failed predictions. We evaluate our methods on an internal voice assistant dataset as well as the public SLURP dataset.
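The prefetch-and-match flow the abstract describes can be sketched in a few lines. This is a minimal illustration under assumed names (`PrefetchCache`, `generate_response`), not the paper's actual system: a response is speculatively generated from a preliminary or predicted hypothesis, and served from cache only if the final hypothesis matches.

```python
# Minimal sketch of prefetch-and-serve for predictive ASR.
# All names here are illustrative assumptions, not from the paper.

class PrefetchCache:
    """Caches responses generated from preliminary or predicted ASR hypotheses."""

    def __init__(self, generate_response):
        # generate_response: the (assumed) downstream system producing a response
        self._generate = generate_response
        self._cache = {}

    def prefetch(self, hypothesis: str) -> None:
        # Speculatively generate and cache a response before endpoint detection.
        if hypothesis not in self._cache:
            self._cache[hypothesis] = self._generate(hypothesis)

    def serve(self, final_hypothesis: str):
        # On a match, the cached response hides generation latency;
        # on a miss, we fall back to generating after endpointing.
        cached = self._cache.get(final_hypothesis)
        if cached is not None:
            return cached, True   # (response, was_prefetched)
        return self._generate(final_hypothesis), False


cache = PrefetchCache(lambda h: f"response for: {h}")
cache.prefetch("play jazz music")           # predicted full utterance
resp, hit = cache.serve("play jazz music")  # final hypothesis matches -> hit
```

A failed prediction simply results in a wasted `prefetch` call plus a fresh generation at serve time, which is the latency/cost tradeoff the paper studies.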
URL
https://arxiv.org/abs/2305.13794