Abstract
Large-scale multimodal generative modeling has created milestones in text-to-image and text-to-video generation. Its application to audio still lags behind for two main reasons: the lack of large-scale datasets with high-quality text-audio pairs, and the complexity of modeling long continuous audio data. In this work, we propose Make-An-Audio, a prompt-enhanced diffusion model that addresses these gaps by 1) introducing pseudo prompt enhancement with a distill-then-reprogram approach, which alleviates data scarcity by constructing orders of magnitude more concept compositions from language-free audio; and 2) leveraging a spectrogram autoencoder to predict a self-supervised audio representation instead of raw waveforms. Together with robust contrastive language-audio pretraining (CLAP) representations, Make-An-Audio achieves state-of-the-art results in both objective and subjective benchmark evaluations. Moreover, we demonstrate its controllability and generalization for X-to-Audio with "No Modality Left Behind", for the first time unlocking the ability to generate high-definition, high-fidelity audio given a user-defined modality input. Audio samples are available at this https URL.
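The abstract outlines a two-stage design: a CLAP text encoder conditions a diffusion model that operates in the latent space of a spectrogram autoencoder rather than on raw waveforms, and a decoder plus vocoder would turn the sampled latent into audio. The sketch below illustrates only that data flow under assumed interfaces; all module names, shapes, and the toy sampler are hypothetical stand-ins, not the authors' released implementation.

```python
# Minimal sketch of the text-to-audio data flow described in the abstract.
# Every module here is a hypothetical stand-in, not the Make-An-Audio code.
import torch
import torch.nn as nn

class CLAPTextEncoder(nn.Module):
    """Stand-in for a frozen CLAP text encoder (assumed interface)."""
    def __init__(self, embed_dim=512):
        super().__init__()
        self.proj = nn.LazyLinear(embed_dim)  # placeholder for a real transformer

    def forward(self, token_embeddings):
        # Pool token embeddings into a single conditioning vector.
        return self.proj(token_embeddings.mean(dim=1))

class LatentDenoiser(nn.Module):
    """Stand-in denoiser operating on spectrogram-autoencoder latents."""
    def __init__(self, latent_dim=8, cond_dim=512):
        super().__init__()
        self.cond_proj = nn.Linear(cond_dim, latent_dim)
        self.net = nn.Conv2d(latent_dim, latent_dim, 3, padding=1)

    def forward(self, z_t, t, cond):
        # Inject the text condition additively (real models use cross-attention).
        c = self.cond_proj(cond)[:, :, None, None]
        return self.net(z_t + c)

@torch.no_grad()
def sample(denoiser, cond, shape, steps=50):
    """Toy ancestral sampler: iteratively refine noise into a clean latent."""
    z = torch.randn(shape)
    for t in reversed(range(steps)):
        eps = denoiser(z, t, cond)
        z = z - eps / steps  # crude update standing in for a DDPM/DDIM step
    return z

# Usage: text embedding -> latent diffusion; a spectrogram decoder and a
# vocoder (omitted here) would map the latent to a waveform.
text_enc, denoiser = CLAPTextEncoder(), LatentDenoiser()
cond = text_enc(torch.randn(1, 16, 512))           # fake token embeddings
z0 = sample(denoiser, cond, shape=(1, 8, 10, 78))  # latent "spectrogram"
print(z0.shape)
```

The key design choice this sketch mirrors is diffusing in a compact latent space rather than over long raw waveforms, which is what makes modeling continuous audio tractable in the paper's framing.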
URL
https://arxiv.org/abs/2301.12661