Abstract
The remarkable success of transformers in natural language processing has sparked the interest of the speech-processing community, leading to an exploration of their potential for modeling long-range dependencies in speech sequences. Transformers have recently gained prominence across speech-related domains, including automatic speech recognition, speech synthesis, speech translation, speech paralinguistics, speech enhancement, spoken dialogue systems, and numerous multimodal applications. In this paper, we present a comprehensive survey that bridges research studies from diverse subfields of speech technology. By consolidating findings from across the speech technology landscape, we provide a valuable resource for researchers interested in harnessing the power of transformers to advance the field. We identify the challenges transformers face in speech processing and offer insights into potential solutions.
URL
https://arxiv.org/abs/2303.11607