Abstract
The recently proposed serialized output training (SOT) simplifies multi-talker automatic speech recognition (ASR) by generating speaker transcriptions separated by a special token. However, frequent speaker changes can make speaker change prediction difficult. To address this, we propose boundary-aware serialized output training (BA-SOT), which explicitly incorporates boundary knowledge into the decoder via a speaker change detection task and a boundary constraint loss. We also introduce a two-stage connectionist temporal classification (CTC) strategy that incorporates token-level SOT CTC to restore temporal context information. Besides the typical character error rate (CER), we introduce utterance-dependent character error rate (UD-CER) to further measure the precision of speaker change prediction. Compared to the original SOT, BA-SOT reduces CER/UD-CER by 5.1%/14.0%, and leveraging a pre-trained ASR model for BA-SOT model initialization further reduces CER/UD-CER by 8.4%/19.9%.
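The serialization step that SOT builds on (concatenating per-speaker transcriptions into a single target sequence, separated by a speaker-change token and ordered first-in-first-out by utterance start time, as in the original SOT formulation) can be sketched as follows; the function name and the literal token spelling `<sc>` are illustrative choices, not taken from this paper:

```python
def serialize_transcriptions(utterances, sc_token="<sc>"):
    """Build a single SOT target sequence from multi-talker references.

    utterances: list of (start_time, text) pairs, one per speaker turn.
    Turns are sorted by start time (first-in-first-out) and joined with
    the speaker-change token, yielding one serialized reference string.
    """
    ordered = sorted(utterances, key=lambda u: u[0])
    return f" {sc_token} ".join(text for _, text in ordered)


# Two overlapping speakers: the earlier-starting turn comes first.
target = serialize_transcriptions([(1.2, "hi"), (0.0, "hello there")])
print(target)  # → hello there <sc> hi
```

BA-SOT then supervises the decoder's prediction of these `<sc>` boundaries directly, via the speaker change detection task and boundary constraint loss described above.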
URL
https://arxiv.org/abs/2305.13716