Abstract
Recent progress in deep generative models has improved the quality of neural vocoders in the speech domain. However, generating high-quality singing voice remains challenging due to the wider variety of musical expression in pitch, loudness, and pronunciation. In this work, we propose a hierarchical diffusion model for singing-voice neural vocoders. The proposed method consists of multiple diffusion models operating at different sampling rates: the model at the lowest sampling rate focuses on generating accurate low-frequency components such as pitch, and the other models progressively generate the waveform at higher sampling rates, conditioned on the data at the lower sampling rate and the acoustic features. Experimental results show that the proposed method produces high-quality singing voice for multiple singers, outperforming state-of-the-art neural vocoders at a similar computational cost.
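The hierarchical cascade described in the abstract can be sketched as follows. This is a minimal illustrative skeleton only: `upsample`, `denoise_step`, and the specific sampling rates are hypothetical placeholders standing in for the paper's diffusion models, not the authors' implementation.

```python
import numpy as np

def upsample(x, factor):
    # Naive linear-interpolation upsampling of a 1-D signal.
    n = len(x)
    grid = np.linspace(0, n - 1, n * factor)
    return np.interp(grid, np.arange(n), x)

def denoise_step(noisy, cond):
    # Placeholder for a diffusion model's reverse (denoising) process:
    # here we simply blend the noisy input toward the conditioning signal.
    return 0.5 * noisy + 0.5 * cond

def hierarchical_generate(rates, length_at_lowest, rng):
    # Generate at the lowest sampling rate first, then refine upward,
    # each stage conditioned on the upsampled lower-rate output.
    wav = rng.standard_normal(length_at_lowest)
    wav = denoise_step(wav, np.zeros_like(wav))   # lowest-rate "model"
    for prev, cur in zip(rates, rates[1:]):
        factor = cur // prev
        cond = upsample(wav, factor)              # lower-rate conditioning
        noise = rng.standard_normal(len(cond))
        wav = denoise_step(noise, cond)           # higher-rate "model"
    return wav

rates = [6000, 12000, 24000, 48000]               # illustrative rates
out = hierarchical_generate(rates, 100, np.random.default_rng(0))
print(len(out))  # 800 samples at the highest rate
```

In the actual method each `denoise_step` would be a full diffusion sampling loop that also takes acoustic features as conditioning; the skeleton only shows the coarse-to-fine flow of information across sampling rates.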
URL
https://arxiv.org/abs/2210.07508