Paper Reading AI Learner

Advanced Long-Content Speech Recognition With Factorized Neural Transducer

2024-03-20 09:09:49
Xun Gong, Yu Wu, Jinyu Li, Shujie Liu, Rui Zhao, Xie Chen, Yanmin Qian

Abstract

In this paper, we propose two novel approaches that integrate long-content information into the factorized neural transducer (FNT) architecture in both non-streaming (referred to as LongFNT) and streaming (referred to as SLongFNT) scenarios. We first investigate whether long-content transcriptions can improve vanilla conformer transducer (C-T) models. Our experiments indicate that vanilla C-T models do not benefit from long-content transcriptions, possibly because the predictor network of a C-T model does not function as a pure language model. FNT, in contrast, shows clear potential for exploiting long-content information: we propose the LongFNT model and explore the impact of long-content information in both text (LongFNT-Text) and speech (LongFNT-Speech). The proposed LongFNT-Text and LongFNT-Speech models complement each other to achieve better performance, with transcription history proving the more valuable of the two. The effectiveness of our LongFNT approach is evaluated on the LibriSpeech and GigaSpeech corpora, where it obtains relative word error rate (WER) reductions of 19% and 12%, respectively. Furthermore, we extend the LongFNT model to the streaming scenario, named SLongFNT, which consists of SLongFNT-Text and SLongFNT-Speech approaches for utilizing long-content text and speech information. Experiments show that the proposed SLongFNT model achieves relative WER reductions of 26% and 17% on LibriSpeech and GigaSpeech, respectively, while maintaining low latency compared to the FNT baseline. Overall, our proposed LongFNT and SLongFNT highlight the significance of long-content speech and transcription knowledge for improving both non-streaming and streaming speech recognition systems.
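The abstract's key observation is that a vanilla transducer fuses encoder and predictor states inside one joint network, so its predictor is not a standalone language model, whereas FNT keeps a separate vocabulary predictor whose LM scores are added to the acoustic vocabulary scores (blank is scored by a separate branch). This is what lets long-content history be fed to the LM branch alone, as in LongFNT-Text. The following is a minimal numpy sketch of that factorized fusion; all names, shapes, and the random toy weights are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def log_softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

class FactorizedJoint:
    """Toy factorized-transducer joint (illustrative shapes and weights).

    Vocabulary tokens get an acoustic score plus an LM log-probability from
    a standalone vocabulary predictor; blank gets its own acoustic score.
    Because the LM branch is separate, it can condition on long-content
    transcription history without touching the acoustic path.
    """
    def __init__(self, enc_dim, vocab, seed=0):
        rng = np.random.default_rng(seed)
        self.W_blank = rng.standard_normal((enc_dim, 1)) * 0.1   # blank branch
        self.W_vocab = rng.standard_normal((enc_dim, vocab)) * 0.1  # acoustic vocab branch

    def __call__(self, enc_t, lm_logprobs):
        vocab_score = enc_t @ self.W_vocab      # acoustic scores, shape (vocab,)
        blank_score = enc_t @ self.W_blank      # acoustic blank score, shape (1,)
        # FNT-style fusion: add standalone-LM log-probs to acoustic vocab
        # scores, then normalize jointly with the blank score.
        fused = np.concatenate([blank_score, vocab_score + lm_logprobs])
        return log_softmax(fused)

joint = FactorizedJoint(enc_dim=4, vocab=6)
enc_t = np.ones(4)                      # one encoder frame (toy)
lm = log_softmax(np.zeros(6))           # uniform LM for the demo
out = joint(enc_t, lm)                  # log-probs over [blank] + vocab
```

In a real FNT the LM branch is a trained predictor network; here a uniform distribution stands in for it, and swapping in a history-conditioned LM is exactly the LongFNT-Text idea.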


URL

https://arxiv.org/abs/2403.13423

PDF

https://arxiv.org/pdf/2403.13423.pdf

