Abstract
Large-scale pre-trained language models (PLMs) with powerful language modeling capabilities have been widely used in natural language processing. For automatic speech recognition (ASR), leveraging PLMs to improve performance has likewise become a promising research direction. However, most previous works are constrained by the inflexible sizes and structures of PLMs and make insufficient use of the knowledge they contain. To alleviate these problems, we propose hierarchical knowledge distillation for continuous integrate-and-fire (CIF) based ASR models. Specifically, we distill knowledge from PLMs into the ASR model by applying cross-modal distillation with a contrastive loss at the acoustic level and distillation with a regression loss at the linguistic level. On the AISHELL-1 dataset, our method achieves a 15% relative error rate reduction over the original CIF-based model and reaches performance (3.8%/4.1% on dev/test) comparable to the state-of-the-art model.
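As a rough illustration of the two distillation terms described in the abstract (not the authors' implementation; all function names, shapes, and loss weights below are assumptions), an InfoNCE-style contrastive loss can align token-synchronous acoustic embeddings from the CIF module with PLM token embeddings, while a simple regression loss can match ASR decoder states to PLM hidden states:

```python
import torch
import torch.nn.functional as F

def acoustic_contrastive_loss(acoustic_emb, plm_emb, temperature=0.1):
    """Cross-modal contrastive loss at the acoustic level (sketch).

    acoustic_emb: (T, D) token-synchronous acoustic embeddings from CIF
    plm_emb:      (T, D) token embeddings from the (frozen) PLM
    Matching positions form positive pairs; all other positions in the
    sequence act as negatives (InfoNCE-style objective).
    """
    a = F.normalize(acoustic_emb, dim=-1)
    p = F.normalize(plm_emb, dim=-1)
    logits = a @ p.t() / temperature              # (T, T) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)

def linguistic_regression_loss(decoder_states, plm_hidden):
    """Regression (MSE) loss at the linguistic level (sketch)."""
    return F.mse_loss(decoder_states, plm_hidden)

# Hypothetical combined objective; the 0.1 weights are illustrative only:
# loss = asr_loss \
#        + 0.1 * acoustic_contrastive_loss(acoustic_emb, plm_emb) \
#        + 0.1 * linguistic_regression_loss(decoder_states, plm_hidden)
```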
URL
https://arxiv.org/abs/2301.13003