Paper Reading AI Learner

Video Diffusion Models with Local-Global Context Guidance

2023-06-05 03:32:27
Siyuan Yang, Lu Zhang, Yu Liu, Zhizhuo Jiang, You He

Abstract

Diffusion models have emerged as a powerful paradigm in video synthesis tasks including prediction, generation, and interpolation. Due to computational budget limitations, existing methods usually implement conditional diffusion models with an autoregressive inference pipeline, in which a future fragment is predicted from the distribution of adjacent past frames. However, conditioning on only a few previous frames cannot capture global temporal coherence, leading to inconsistent or even implausible results in long-term video prediction. In this paper, we propose a Local-Global Context guided Video Diffusion model (LGC-VD) to capture multi-perception conditions for producing high-quality videos in both conditional and unconditional settings. In LGC-VD, the UNet is implemented with stacked residual blocks and self-attention units, avoiding the heavy computational cost of 3D convolutions. We construct a local-global context guidance strategy that captures a multi-perceptual embedding of the past fragment to boost the consistency of future predictions. Furthermore, we propose a two-stage training strategy to alleviate the effect of noisy frames and yield more stable predictions. Our experiments demonstrate that the proposed method achieves favorable performance on video prediction, interpolation, and unconditional video generation. We release code at this https URL.
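To make the conditioning idea concrete, below is a minimal, hypothetical PyTorch sketch of a 2D denoiser guided by a local embedding (the last few frames) and a global embedding (pooled over the whole past fragment), built from stacked residual blocks with self-attention instead of 3D convolutions. All module names, shapes, and the `ContextGuidedDenoiser` interface are illustrative assumptions, not the authors' released LGC-VD implementation; the diffusion timestep embedding and the full sampling loop are omitted for brevity.

```python
# Hypothetical sketch of local-global context guidance for a video diffusion denoiser.
import torch
import torch.nn as nn


class ResAttnBlock(nn.Module):
    """Residual conv block followed by spatial self-attention (no 3D convolutions)."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.GroupNorm(8, channels), nn.SiLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.GroupNorm(8, channels), nn.SiLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x):
        x = x + self.conv(x)                           # residual path
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)          # (B, H*W, C) tokens
        attn_out, _ = self.attn(tokens, tokens, tokens)
        return x + attn_out.transpose(1, 2).view(b, c, h, w)


class ContextGuidedDenoiser(nn.Module):
    """Predicts noise for K future frames from noisy frames + local/global context."""

    def __init__(self, frame_ch=3, k_future=4, ctx_frames=2, frag_len=8, hidden=64):
        super().__init__()
        in_ch = frame_ch * (k_future + ctx_frames)     # noisy future + local frames
        self.stem = nn.Conv2d(in_ch, hidden, 3, padding=1)
        self.global_enc = nn.Sequential(               # whole past fragment -> global vector
            nn.Conv2d(frame_ch * frag_len, hidden, 3, padding=1),
            nn.AdaptiveAvgPool2d(1),
        )
        self.blocks = nn.ModuleList([ResAttnBlock(hidden) for _ in range(3)])
        self.head = nn.Conv2d(hidden, frame_ch * k_future, 3, padding=1)

    def forward(self, noisy_future, local_frames, past_fragment):
        x = self.stem(torch.cat([noisy_future, local_frames], dim=1))
        g = self.global_enc(past_fragment)             # (B, hidden, 1, 1) global embedding
        x = x + g                                      # inject global guidance (broadcast)
        for blk in self.blocks:
            x = blk(x)
        return self.head(x)                            # predicted noise for the K frames


# Usage sketch: one denoising call. A full autoregressive pipeline would iterate
# DDPM/DDIM steps to sample the K frames, append them to the fragment, and repeat.
model = ContextGuidedDenoiser()
past = torch.randn(1, 3 * 8, 32, 32)       # 8 past frames stacked on channels
local = past[:, -3 * 2:]                   # last 2 frames as the local condition
noisy = torch.randn(1, 3 * 4, 32, 32)      # 4 noisy future frames
eps_hat = model(noisy, local, past)        # (1, 12, 32, 32) predicted noise
```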

Abstract (translated)

Diffusion models have become a powerful paradigm in video synthesis tasks, including prediction, generation, and interpolation. Due to computational budget limitations, existing methods usually implement conditional diffusion models with an autoregressive inference pipeline, in which the future fragment is predicted from the distribution of adjacent past frames. However, conditioning on only a few adjacent frames cannot capture global temporal coherence, leading to inconsistent or even degraded results in long-term video prediction. In this paper, we propose a Local-Global Context guided Video Diffusion model (LGC-VD) to capture multi-perception conditions for producing high-quality videos in both conditional and unconditional settings. In LGC-VD, the UNet is implemented with stacked residual blocks and attention units, avoiding the undesirable computational cost of 3D convolutions. We construct a local-global context guidance strategy to capture a multi-perceptual embedding of the past fragment and enhance the consistency of future predictions. Furthermore, we propose a two-stage training strategy to alleviate the effect of noisy frames for more stable predictions. Our experiments show that the method achieves favorable performance on video prediction, interpolation, and unconditional video generation. We release code at this https URL.

URL

https://arxiv.org/abs/2306.02562

PDF

https://arxiv.org/pdf/2306.02562.pdf

