Abstract
The recent surge in popularity of diffusion models for image generation has brought new attention to the potential of these models in other areas of media synthesis. One area that has yet to be fully explored is the application of diffusion models to music generation. Music generation requires handling multiple aspects, including the temporal dimension, long-term structure, multiple layers of overlapping sounds, and nuances that only trained listeners can detect. In our work, we investigate the potential of diffusion models for text-conditional music generation. We develop a cascading latent diffusion approach that can generate multiple minutes of high-quality stereo music at 48kHz from textual descriptions. For each model, we make an effort to maintain reasonable inference speed, targeting real-time generation on a single consumer GPU. In addition to trained models, we provide a collection of open-source libraries in the hope of facilitating future work in the field. We open-source the following:
- Music samples for this paper: this https URL
- All music samples for all models: this https URL
- Code: this https URL
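To make the cascade concrete, below is a minimal, hypothetical sketch of the two-stage structure the abstract describes: a first model compresses the stereo waveform into a compact latent sequence, and a second, text-conditioned diffusion model generates those latents before the first model decodes them back to audio. Every class name, layer choice, and the simplistic sampling update here is an assumption for illustration only, not the paper's actual architecture or its released API.

```python
# Illustrative sketch of a two-stage cascading latent text-to-music pipeline.
# All module names and hyperparameters are placeholders, not the paper's code.
import torch
import torch.nn as nn

class LatentAutoencoder(nn.Module):
    """Stage 1 (hypothetical): compress stereo audio into a coarse latent."""
    def __init__(self, channels=2, latent_dim=32, factor=64):
        super().__init__()
        self.encoder = nn.Conv1d(channels, latent_dim, kernel_size=factor, stride=factor)
        self.decoder = nn.ConvTranspose1d(latent_dim, channels, kernel_size=factor, stride=factor)

    def encode(self, audio):   # (B, 2, T) -> (B, latent_dim, T // factor)
        return self.encoder(audio)

    def decode(self, latent):  # inverse mapping back to the waveform
        return self.decoder(latent)

class LatentDiffusion(nn.Module):
    """Stage 2 (hypothetical): denoise latents conditioned on a text embedding."""
    def __init__(self, latent_dim=32, text_dim=512):
        super().__init__()
        self.net = nn.Conv1d(latent_dim, latent_dim, kernel_size=3, padding=1)
        self.text_proj = nn.Linear(text_dim, latent_dim)

    def forward(self, z, t, text_emb):
        # Predict the noise in z given timestep t and the text conditioning.
        cond = self.text_proj(text_emb).unsqueeze(-1)  # broadcast over time
        return self.net(z + t * cond)

@torch.no_grad()
def sample(diffusion, autoencoder, text_emb, latent_len=1024, steps=50):
    """Start from Gaussian noise, iteratively denoise, then decode to audio."""
    z = torch.randn(text_emb.shape[0], 32, latent_len)
    for i in reversed(range(steps)):
        t = torch.full((1,), i / steps)
        eps = diffusion(z, t, text_emb)
        z = z - eps / steps  # crude Euler-style update, for illustration only
    return autoencoder.decode(z)  # (B, 2, latent_len * factor) waveform
```

In the full system both stages may themselves be diffusion-based and sampling would use a proper solver; the sketch is meant only to show the data flow, text embedding → latent sequence → 48kHz stereo waveform, and why decoding a short latent into a long waveform keeps inference fast.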
URL
https://arxiv.org/abs/2301.11757