Abstract
Training speech separation models in the supervised setting raises a permutation problem: finding the best assignment between the model's predictions and the ground-truth separated signals. This inherently ambiguous task is customarily solved using Permutation Invariant Training (PIT). In this article, we instead consider using the Multiple Choice Learning (MCL) framework, which was originally introduced to tackle ambiguous tasks. We demonstrate experimentally on the popular WSJ0-mix and LibriMix benchmarks that MCL matches the performance of PIT, while being computationally advantageous. This opens the door to a promising research direction, as MCL can be naturally extended to handle a variable number of speakers, or to tackle speech separation in the unsupervised setting.
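To make the contrast concrete, here is a minimal sketch of the two training objectives, assuming predictions and references are `(n_src, time)` tensors and a generic per-pair loss `loss_fn` (e.g., MSE or negative SI-SDR). The function names and the winner-takes-all MCL variant shown are illustrative assumptions, not the authors' implementation: PIT searches over all n! assignments, whereas the winner-takes-all objective only needs the n x n pairwise losses.

```python
# Minimal sketch, assuming (n_src, time) tensors and a per-pair loss
# such as MSE or negative SI-SDR. Hypothetical helper names; not the
# authors' implementation.
import itertools
import torch

def pairwise_losses(preds, targets, loss_fn):
    """n x n matrix with entry [i, j] = loss_fn(preds[i], targets[j])."""
    n = preds.shape[0]
    return torch.stack([
        torch.stack([loss_fn(preds[i], targets[j]) for j in range(n)])
        for i in range(n)
    ])

def pit_loss(preds, targets, loss_fn):
    """Permutation Invariant Training: minimum average loss over all
    n! assignments of predictions to reference sources."""
    pair = pairwise_losses(preds, targets, loss_fn)
    n = pair.shape[0]
    return min(
        sum(pair[i, p] for i, p in enumerate(perm)) / n
        for perm in itertools.permutations(range(n))
    )

def mcl_loss(preds, targets, loss_fn):
    """Multiple Choice Learning (winner-takes-all): each reference source
    is scored against the hypothesis that predicts it best; no search
    over full permutations is needed."""
    pair = pairwise_losses(preds, targets, loss_fn)
    return pair.min(dim=0).values.mean()

# Example usage with a simple MSE per-pair loss:
if __name__ == "__main__":
    mse = lambda a, b: ((a - b) ** 2).mean()
    preds = torch.randn(3, 16000)    # 3 estimated sources, 1 s at 16 kHz
    targets = torch.randn(3, 16000)  # 3 reference sources
    print(pit_loss(preds, targets, mse), mcl_loss(preds, targets, mse))
```

The practical difference is visible in the loops: the PIT objective enumerates n! permutations, while the winner-takes-all objective only requires the n^2 pairwise losses, one reason for the computational advantage the abstract mentions.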
URL
https://arxiv.org/abs/2411.18497