Abstract
Previous studies on music style transfer have mainly focused on one-to-one style conversion, which is relatively limited. When converting between multiple styles, earlier methods required designing multiple modes to disentangle the music's complex styles, which incurs high computational cost and slow audio generation. Existing music style transfer methods also generate spectrograms with artifacts, leading to significant noise in the output audio. To address these issues, this study proposes a music style transfer framework based on diffusion models (DM) that operates on spectrograms to achieve multi-to-multi music style transfer. The GuideDiff method restores spectrograms to high-fidelity audio, accelerating generation and reducing noise in the output. Experimental results show that, compared with the baselines, our model performs well on multi-mode music style transfer and can generate high-quality audio in real time on consumer-grade GPUs.
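The abstract outlines a pipeline of audio -> mel spectrogram -> diffusion-based style transfer -> vocoder -> audio. Below is a minimal PyTorch sketch of that pipeline, for orientation only: the denoiser network and its style-conditioning interface are hypothetical stand-ins (the paper's actual architecture and the GuideDiff vocoder are not reproduced here), and the noise schedule is a standard linear-beta DDPM schedule rather than the paper's.

# Sketch of a spectrogram-based diffusion style-transfer pipeline.
# `denoiser` is a hypothetical network predicting noise eps from (x_t, t, style_id).

import torch
import torchaudio

def audio_to_mel(waveform: torch.Tensor, sample_rate: int) -> torch.Tensor:
    """Convert a mono waveform to a log-mel spectrogram."""
    mel = torchaudio.transforms.MelSpectrogram(
        sample_rate=sample_rate, n_fft=1024, hop_length=256, n_mels=80
    )(waveform)
    return torch.log(mel.clamp(min=1e-5))

@torch.no_grad()
def ddpm_style_transfer(mel: torch.Tensor, denoiser, style_id: int,
                        num_steps: int = 50) -> torch.Tensor:
    """Toy DDPM reverse process conditioned on a target style label."""
    betas = torch.linspace(1e-4, 0.02, num_steps)   # linear-beta schedule (assumption)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn_like(mel)                        # start from pure noise
    for t in reversed(range(num_steps)):
        eps = denoiser(x, torch.tensor([t]), style_id)
        # DDPM posterior mean: remove the predicted noise, rescale by 1/sqrt(alpha_t).
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / torch.sqrt(alphas[t])
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x  # styled spectrogram; a vocoder (e.g., GuideDiff in the paper) maps it back to audio

In the paper's framing, the final spectrogram-to-waveform step is what GuideDiff accelerates; the sketch above leaves that stage abstract.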
URL
https://arxiv.org/abs/2404.14771