Knowledge Distillation from Multiple Foundation Models for End-to-End Speech Recognition

2023-03-20 07:18:18
Xiaoyu Yang, Qiujia Li, Chao Zhang, Philip C. Woodland

Abstract

Although large foundation models pre-trained by self-supervised learning have achieved state-of-the-art performance in many tasks including automatic speech recognition (ASR), knowledge distillation (KD) is often required in practice to transfer the knowledge learned by large teacher models into much smaller student models with affordable computation and memory costs. This paper proposes a novel two-stage KD framework to distil the knowledge from multiple speech foundation models as teachers into a single student neural transducer model for ASR. In the first stage, the student model encoder is pre-trained using the embeddings extracted from multiple teacher models. In the second stage, the student encoder is fine-tuned on the ASR task using audio-text pairs. Experiments on the LibriSpeech 100-hour subset show that the proposed KD framework improves the performance of both streaming and non-streaming student models when using only one teacher. The performance of the student model can be further enhanced when multiple teachers are used jointly, achieving word error rate reductions (WERRs) of 17.5% and 10.6%. Our proposed framework can be combined with other existing KD methods to achieve further improvements. Further WERRs were obtained by incorporating extra unlabelled data during encoder pre-training, leading to a total relative WERR of 55.0% on the non-streaming student model.
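
The two-stage recipe described above can be read as: (1) pre-train the student encoder to regress the frame-level embeddings extracted from several frozen teacher models, then (2) fine-tune that encoder inside a neural transducer on labelled audio-text pairs. Below is a minimal PyTorch sketch of one plausible stage-1 objective; the per-teacher linear projection heads, the L1 loss, and the equal weighting of teachers are illustrative assumptions, since the abstract does not specify the exact distillation loss.

```python
import torch
import torch.nn as nn

class MultiTeacherEmbeddingKD(nn.Module):
    """Stage-1 loss: match student encoder frames to multiple teacher embeddings."""

    def __init__(self, student_dim, teacher_dims):
        super().__init__()
        # One linear head per teacher maps student frames into that teacher's
        # embedding space (a hypothetical choice, not taken from the paper).
        self.heads = nn.ModuleList([nn.Linear(student_dim, d) for d in teacher_dims])
        self.loss_fn = nn.L1Loss()

    def forward(self, student_hidden, teacher_embeddings):
        # student_hidden:     (batch, frames, student_dim) from the student encoder.
        # teacher_embeddings: list of (batch, frames, teacher_dim_i) tensors,
        #                     pre-extracted from the frozen teachers and assumed
        #                     to be time-aligned with the student frame rate.
        losses = [self.loss_fn(head(student_hidden), target)
                  for head, target in zip(self.heads, teacher_embeddings)]
        # Assumption: all teachers are weighted equally.
        return torch.stack(losses).mean()

# Toy usage with random tensors standing in for real features.
kd_loss = MultiTeacherEmbeddingKD(student_dim=512, teacher_dims=[768, 1024])
student_frames = torch.randn(2, 100, 512)
teacher_frames = [torch.randn(2, 100, 768), torch.randn(2, 100, 1024)]
loss = kd_loss(student_frames, teacher_frames)  # scalar used to pre-train the encoder
```

In stage 2, the projection heads would be discarded and the pre-trained encoder fine-tuned with the transducer loss on the audio-text pairs.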

URL

https://arxiv.org/abs/2303.10917

PDF

https://arxiv.org/pdf/2303.10917.pdf