Abstract
Diffusion models have emerged as a powerful paradigm for video synthesis tasks, including prediction, generation, and interpolation. Due to limited computational budgets, existing methods typically implement conditional diffusion models with an autoregressive inference pipeline, in which each future fragment is predicted from the distribution of a few adjacent past frames. However, conditioning on only a few previous frames cannot capture global temporal coherence, leading to inconsistent or even implausible results in long-term video prediction. In this paper, we propose a Local-Global Context guided Video Diffusion model (LGC-VD) that captures multi-perception conditions for producing high-quality videos in both conditional and unconditional settings. In LGC-VD, the UNet is built from stacked residual blocks with self-attention units, avoiding the heavy computational cost of 3D convolutions. We construct a local-global context guidance strategy that captures a multi-perceptual embedding of the past fragment to boost the consistency of future predictions. Furthermore, we propose a two-stage training strategy that alleviates the effect of noisy frames, yielding more stable predictions. Our experiments demonstrate that the proposed method achieves favorable performance on video prediction, interpolation, and unconditional video generation. We release code at this https URL.
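The UNet design mentioned above pairs 2D residual blocks with self-attention instead of 3D convolutions. The following is a minimal NumPy sketch of that idea, not the paper's implementation: the 1x1 "convolutions", channel width, and weight shapes are all hypothetical, and a feature map is flattened to shape (H*W, C) so attention mixes information across spatial positions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def residual_block(x, w1, w2):
    # per-pixel linear maps acting as illustrative 1x1 convolutions,
    # with a ReLU nonlinearity and a residual connection
    h = np.maximum(x @ w1, 0.0)          # (H*W, C) -> (H*W, C)
    return x + h @ w2

def self_attention(x, wq, wk, wv):
    # single-head spatial self-attention over all H*W positions
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = softmax(q @ k.T / np.sqrt(q.shape[-1]), axis=-1)
    return x + scores @ v                # residual connection

rng = np.random.default_rng(0)
C = 8                                    # hypothetical channel width
x = rng.standard_normal((16, C))         # 4x4 feature map flattened to (H*W, C)
ws = [rng.standard_normal((C, C)) * 0.1 for _ in range(5)]
y = self_attention(residual_block(x, *ws[:2]), *ws[2:])
print(y.shape)                           # shape is preserved: (16, 8)
```

Because attention compares every spatial position with every other, temporal context can be injected via the conditioning embedding rather than by convolving over a time axis, which is what lets the model avoid 3D convolutions.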
URL
https://arxiv.org/abs/2306.02562