Abstract
Self-supervised learning (SSL) based speech pre-training has attracted much attention for its ability to learn rich representations from massive unlabeled data. In contrast, the use of weakly-supervised data for speech pre-training remains less explored. To fill this gap, we propose a weakly-supervised speech pre-training method based on speaker-aware speech data. It follows a training procedure similar to the widely-used masked speech prediction based SSL framework, while incorporating additional target-speaker enrollment information as an auxiliary input. In this way, the learned representation is steered towards the target speaker even in the presence of highly overlapping interference, enabling potential applications to tasks such as target speech recognition. Our experiments on the Libri2Mix and WSJ0-2mix datasets show that the proposed model achieves significantly better ASR performance than WavLM, the state-of-the-art SSL model with denoising capability.
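The core idea, masked speech prediction conditioned on a target-speaker enrollment embedding, can be sketched as below. This is a minimal illustrative sketch, not the paper's actual architecture: the frame features, the additive fusion of the speaker embedding, and the single linear "encoder" are all simplifying assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: T frames of D-dim speech features, plus a
# target-speaker enrollment embedding (illustrative stand-ins, not
# the model's real feature extractor or speaker encoder).
T, D = 20, 8
features = rng.standard_normal((T, D))
speaker_embedding = rng.standard_normal(D)

# 1) Mask a random subset of frames, as in masked speech prediction.
mask = rng.random(T) < 0.3           # roughly 30% of frames masked
masked = features.copy()
masked[mask] = 0.0                   # zero out the masked frames

# 2) Condition every frame on the enrollment embedding (simple additive
#    fusion here; a real model might concatenate or use attention).
conditioned = masked + speaker_embedding

# 3) A stand-in "encoder": one random linear projection.
W = rng.standard_normal((D, D)) * 0.1
predicted = conditioned @ W

# 4) Training objective: reconstruct the original content of the
#    masked frames, which the speaker cue helps disambiguate when
#    the input mixes overlapping speakers.
loss = np.abs(predicted[mask] - features[mask]).mean()
print(f"masked frames: {int(mask.sum())}, loss: {loss:.3f}")
```

The speaker conditioning in step 2 is what distinguishes this setup from a plain SSL objective: with overlapped speech as input, the enrollment embedding tells the encoder which speaker's frames the prediction target refers to.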
URL
https://arxiv.org/abs/2305.16286