Abstract
In this paper, we propose NUWA-XL, a novel Diffusion over Diffusion architecture for eXtremely Long video generation. Most current work generates long videos segment by segment sequentially, which normally leads to a gap between training on short videos and inferring long videos; sequential generation is also inefficient. Instead, our approach adopts a ``coarse-to-fine'' process, in which segments at the same granularity can be generated in parallel. A global diffusion model is applied to generate the keyframes across the entire time range, and then local diffusion models recursively fill in the content between nearby frames. This simple yet effective strategy allows us to train directly on long videos (3376 frames) to reduce the training-inference gap, and makes it possible to generate all segments in parallel. To evaluate our model, we build FlintstonesHD, a new benchmark dataset for long video generation. Experiments show that our model not only generates high-quality long videos with both global and local coherence, but also reduces the average inference time from 7.55 min to 26 s (by 94.26\%) on the same hardware when generating 1024 frames. The homepage link is \url{this https URL}.
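The ``coarse-to-fine'' process described above can be summarized in a few lines of control flow: one pass of a global diffusion model produces sparse keyframes, and local diffusion models then fill in each gap, recursively and in parallel. Below is a minimal Python sketch of that loop; the model objects, their sample methods, and the first/last endpoint-conditioning interface are our own illustrative assumptions, not the paper's actual API.

    # Hedged sketch of the coarse-to-fine generation described in the
    # abstract. `global_model` and `local_model` and their `sample`
    # signatures are hypothetical placeholders; only the control flow
    # mirrors the text.

    def generate_long_video(prompt, global_model, local_model, depth):
        # Global diffusion: sparse keyframes spanning the whole time range.
        keyframes = global_model.sample(prompt)
        return refine(prompt, keyframes, local_model, depth)

    def refine(prompt, frames, local_model, depth):
        if depth == 0:
            return frames
        # Local diffusion fills in the content between each pair of nearby
        # frames; the segments are independent, so they can run in parallel.
        segments = [
            local_model.sample(prompt, first=frames[i], last=frames[i + 1])
            for i in range(len(frames) - 1)
        ]
        # Stitch the segments, dropping duplicated boundary frames, and
        # recurse one level finer.
        merged = segments[0] + [f for seg in segments[1:] for f in seg[1:]]
        return refine(prompt, merged, local_model, depth - 1)

Because every recursion level only conditions on the frames produced one level above, all segments at a given depth can be sampled concurrently, which is the source of the reported inference speedup.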
URL
https://arxiv.org/abs/2303.12346