Abstract
Audio-based generative models for music have made great strides recently, but so far none has produced full-length music tracks with coherent musical structure. We show that by training a generative model on long temporal contexts it is possible to produce long-form music of up to 4m45s. Our model consists of a diffusion-transformer operating on a highly downsampled continuous latent representation (latent rate of 21.5 Hz). It achieves state-of-the-art results on audio-quality and prompt-alignment metrics, and subjective tests show that it produces full-length music with coherent structure.
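The abstract's numbers imply the sequence length the diffusion-transformer must handle. A minimal sketch of that arithmetic, using only the latent rate (21.5 Hz) and maximum duration (4m45s) stated above (the resulting frame count is derived here, not a figure from the paper):

```python
# Derive the approximate latent-sequence length from the stated figures.
duration_s = 4 * 60 + 45        # 4m45s -> 285 seconds of audio
latent_rate_hz = 21.5           # continuous latent frames per second (from the abstract)

num_latents = duration_s * latent_rate_hz
print(f"{duration_s} s of audio -> {num_latents:.0f} latent frames")
```

So a full-length generation corresponds to a context of roughly six thousand latent frames, which is why the heavy downsampling matters: at typical audio sample rates the same duration would be millions of timesteps.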
URL
https://arxiv.org/abs/2404.10301