Abstract
In this paper, we present a novel approach to text-independent phone-to-audio alignment based on phoneme recognition, representation learning, and knowledge transfer. Our method combines a self-supervised model (wav2vec2) fine-tuned for phoneme recognition with a Connectionist Temporal Classification (CTC) loss, a dimensionality-reduction model, and a frame-level phoneme classifier trained on forced-alignment labels (produced with the Montreal Forced Aligner) to yield multilingual phonetic representations, thus requiring minimal additional training. We evaluate our model on synthetic native data from the TIMIT dataset and the SCRIBE dataset for American and British English, respectively. Our proposed model outperforms the state of the art (charsiu) on statistical metrics and has applications in language learning and speech processing systems. We leave experiments on other languages for future work, but the design of the system makes it easily adaptable to other languages.
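The pipeline described above can be sketched at a high level as follows. This is a minimal illustrative sketch, not the paper's implementation: the wav2vec2 frame features are replaced by random vectors, the dimensionality-reduction step is stood in by PCA via SVD, the classifier uses untrained random weights, and all dimensions (768 features, 32 reduced components, 40 phoneme classes) are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for wav2vec2 frame-level features: (n_frames, feat_dim).
# In the real system these would come from a fine-tuned wav2vec2 model.
n_frames, feat_dim, reduced_dim, n_phones = 50, 768, 32, 40
features = rng.normal(size=(n_frames, feat_dim))

# Dimensionality reduction (PCA via SVD as a stand-in for the paper's
# dimension-reduction model).
centered = features - features.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
reduced = centered @ vt[:reduced_dim].T            # (n_frames, reduced_dim)

# Frame-level phoneme classifier: a linear layer + softmax. Here the
# weights are random; in the paper they are trained on forced-alignment
# labels from the Montreal Forced Aligner.
w = rng.normal(size=(reduced_dim, n_phones))
logits = reduced @ w
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)          # per-frame posteriors

# Per-frame phoneme decisions; contiguous runs of the same label form
# candidate phone-to-audio alignment segments (phone_id, start, end).
frame_labels = probs.argmax(axis=1)
segments, start = [], 0
for i in range(1, n_frames + 1):
    if i == n_frames or frame_labels[i] != frame_labels[i - 1]:
        segments.append((int(frame_labels[start]), start, i))
        start = i

print(probs.shape, len(segments))
```

Because the classifier operates frame by frame on the learned representations, alignment falls out of grouping consecutive frames with the same predicted phoneme, with no transcript required at inference time.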
URL
https://arxiv.org/abs/2405.02124