Abstract
We investigate whisper-to-natural-speech conversion using a sequence-to-sequence approach, proposing a modified transformer architecture. We examine different input features, including mel frequency cepstral coefficients and smoothed spectral features. The proposed networks are trained end-to-end in a supervised manner for feature-to-feature transformation. Further, we investigate the effectiveness of an embedded auxiliary decoder placed after N encoder sub-layers, trained with a frame-level objective function for identifying source phoneme labels. We report results on the wTIMIT and CHAINS datasets, measuring word error rate with an end-to-end ASR system and BLEU scores for the generated speech. In addition, we assess the spectral shape of the generated speech by comparing its formant distributions with those of the reference speech, as a formant divergence metric. We find that the formant probability distribution of the whisper-to-natural converted speech is similar to the ground-truth distribution. To the authors' best knowledge, this is the first time a modified transformer has been applied to whisper-to-natural-speech conversion and vice versa.
URL
https://arxiv.org/abs/2004.09347