Abstract
This work proposes the use of clean-speech vocoder parameters as the target for a neural network performing speech enhancement. These parameters were designed for text-to-speech synthesis so that they both produce high-quality resyntheses and are straightforward to model with neural networks, but they have not been utilized in speech enhancement until now. In comparison to a matched text-to-speech system that is given the ground-truth transcripts of the noisy speech, our model produces more natural speech because it has access to the true prosody in the noisy speech. In comparison to two denoising systems, the oracle Wiener mask and a DNN-based mask predictor, our model equals the oracle Wiener mask in subjective quality and intelligibility and surpasses the realistic system. A vocoder-based upper bound shows that there is still room for improvement with this approach beyond the oracle Wiener mask. We test speaker dependence with two speakers and show that a single model can be used for multiple speakers.
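For context, the oracle Wiener mask baseline named in the abstract is a standard time-frequency mask computed from the ground-truth clean and noise spectra, so it is only available when the true decomposition of the mixture is known. A minimal NumPy sketch (the STFT shapes and variable names here are illustrative assumptions, not from the paper):

```python
import numpy as np

def oracle_wiener_mask(clean_stft, noise_stft):
    """Oracle Wiener mask: per-bin ratio of clean speech power to total
    power. Requires ground-truth clean and noise spectra, hence 'oracle'."""
    s_pow = np.abs(clean_stft) ** 2
    n_pow = np.abs(noise_stft) ** 2
    return s_pow / (s_pow + n_pow + 1e-12)  # epsilon avoids divide-by-zero

# Illustrative example with random complex "spectra" standing in for STFTs
# (257 frequency bins x 100 frames, a common shape for a 512-point FFT).
rng = np.random.default_rng(0)
clean = rng.standard_normal((257, 100)) + 1j * rng.standard_normal((257, 100))
noise = rng.standard_normal((257, 100)) + 1j * rng.standard_normal((257, 100))

mask = oracle_wiener_mask(clean, noise)
# Applying the mask to the noisy mixture attenuates noise-dominated bins.
enhanced = mask * (clean + noise)
```

A realistic system, such as the DNN-based mask predictor the paper compares against, must instead estimate this mask from the noisy mixture alone, which is why the oracle version serves as a strong reference point rather than a deployable method.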
URL
https://arxiv.org/abs/1904.01537