Abstract
The recent surge in popularity of diffusion models for image generation has brought new attention to the potential of these models in other areas of media generation. One area that has yet to be fully explored is the application of diffusion models to audio generation. Audio generation requires an understanding of multiple aspects, such as the temporal dimension, long-term structure, multiple layers of overlapping sounds, and the nuances that only trained listeners can detect. In this work, we investigate the potential of diffusion models for audio generation. We propose a set of models to tackle multiple aspects, including a new method for text-conditional latent audio diffusion with stacked 1D U-Nets that can generate multiple minutes of music from a textual description. For each model, we make an effort to maintain reasonable inference speed, targeting real-time generation on a single consumer GPU. In addition to the trained models, we provide a collection of open-source libraries with the hope of simplifying future work in the field. Samples can be found at this https URL. Code is available at this https URL.
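As a rough illustration of the kind of model the abstract describes, the sketch below shows a toy text-conditional denoising-diffusion training step over 1D audio latents with a small two-level 1D U-Net in PyTorch. Every class name, shape, and schedule here is an assumption chosen for brevity; this is not the paper's architecture or its released libraries, only the general technique (predicting the noise added to a latent sequence, conditioned on a text embedding and the diffusion time).

```python
# Minimal, illustrative sketch (not the authors' code) of text-conditional
# latent audio diffusion with a tiny 1D U-Net. All names, shapes, and the
# noise schedule are assumptions for demonstration purposes only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Block1d(nn.Module):
    """Conv block whose conditioning vector is added as a per-channel bias."""
    def __init__(self, in_ch, out_ch, cond_dim):
        super().__init__()
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=1)
        self.to_bias = nn.Linear(cond_dim, out_ch)

    def forward(self, x, cond):
        h = self.conv(x)
        h = h + self.to_bias(cond).unsqueeze(-1)   # broadcast over time axis
        return F.silu(h)


class TinyUNet1d(nn.Module):
    """Two-level 1D U-Net that predicts the noise added to a latent sequence."""
    def __init__(self, channels=32, cond_dim=64):
        super().__init__()
        self.time_embed = nn.Linear(1, cond_dim)
        self.down = Block1d(channels, channels * 2, cond_dim)
        self.mid = Block1d(channels * 2, channels * 2, cond_dim)
        self.up = Block1d(channels * 4, channels, cond_dim)  # skip concat doubles channels
        self.out = nn.Conv1d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x, t, text_emb):
        # Fuse diffusion time and text embedding into one conditioning vector.
        cond = self.time_embed(t.unsqueeze(-1)) + text_emb
        d = self.down(x, cond)                       # (B, 2C, T)
        h = F.avg_pool1d(d, 2)                       # downsample the time axis
        h = self.mid(h, cond)
        h = F.interpolate(h, scale_factor=2)         # upsample back
        h = self.up(torch.cat([h, d], dim=1), cond)  # skip connection
        return self.out(h)


def training_step(model, latents, text_emb):
    """One denoising step: add Gaussian noise at a random time, predict it."""
    t = torch.rand(latents.shape[0])                   # t ~ U(0, 1)
    noise = torch.randn_like(latents)
    alpha = (1 - t).view(-1, 1, 1)                     # simple linear schedule (assumed)
    noisy = alpha.sqrt() * latents + (1 - alpha).sqrt() * noise
    pred = model(noisy, t, text_emb)
    return F.mse_loss(pred, noise)


if __name__ == "__main__":
    model = TinyUNet1d()
    latents = torch.randn(4, 32, 256)   # (batch, latent channels, time frames)
    text_emb = torch.randn(4, 64)       # stand-in for a frozen text encoder output
    print(training_step(model, latents, text_emb).item())
```

In the paper's setting the latents would come from a learned audio autoencoder and the text embedding from a pretrained text encoder; here both are replaced with random tensors so the sketch stays self-contained.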
Abstract (translated)
The recent popularity of diffusion models for image generation has drawn new attention to their potential in other areas of media generation. One area that has yet to be fully explored is applying these models to audio generation. Audio generation requires an understanding of many aspects, such as the temporal dimension, long-term structure, multiple layers of overlapping sounds, and subtle differences that only trained listeners can detect. In this work, we study the potential of diffusion models for audio generation. We propose a set of models to address these aspects, including a new method for text-conditional latent audio diffusion using stacked 1D U-Nets that can generate several minutes of music from a textual description. For each model, we strive to maintain a reasonable inference speed, aiming for real-time performance on a single consumer GPU. Beyond the trained models, we provide a set of open-source libraries in the hope of simplifying future work in this field. Samples can be found at this https URL. Code is at this https URL.
URL
https://arxiv.org/abs/2301.13267