
S2DM: Sector-Shaped Diffusion Models for Video Generation

2024-03-20 08:50:15
Haoran Lang, Yuxuan Ge, Zheng Tian

Abstract

Diffusion models have achieved great success in image generation. However, when leveraging this idea for video generation, we face significant challenges in maintaining consistency and continuity across video frames, mainly because there is no effective framework to align video frames with the desired temporal features while preserving consistent semantic and stochastic features. In this work, we propose a novel Sector-Shaped Diffusion Model (S2DM), whose sector-shaped diffusion region is formed by a set of ray-shaped reverse diffusion processes starting from the same noise point. S2DM can generate a group of intrinsically related data that share the same semantic and stochastic features while varying in temporal features under appropriate guiding conditions. We apply S2DM to video generation tasks and explore the use of optical flow as the temporal condition. Our experimental results show that S2DM outperforms many existing methods on video generation without any temporal-feature modelling modules. For text-to-video generation, where temporal conditions are not explicitly given, we propose a two-stage generation strategy that decouples the generation of temporal features from semantic-content features. We show that, without additional training, our model integrated with another temporal-conditions generative model still achieves performance comparable to existing works. Our results can be viewed at this https URL.
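The abstract describes the core idea only at a high level, so the following is a minimal sketch of what "sector-shaped" sampling could look like: one shared noise sample (the sector's apex) and a shared semantic condition drive several deterministic reverse-diffusion "rays", each steered by a different per-frame temporal condition. It assumes a DDIM-style deterministic sampler; `toy_denoiser`, the noise schedule, the shapes, and the conditioning scheme are illustrative stand-ins, not the paper's actual model.

```python
# Hedged sketch of S2DM-style "sector-shaped" sampling, reconstructed only
# from the abstract. All names and hyperparameters are assumptions.
import numpy as np

T = 50                                   # diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)       # standard linear schedule (assumed)
alpha_bars = np.cumprod(1.0 - betas)

def toy_denoiser(x, t, sem_cond, temp_cond):
    """Stand-in for a learned noise predictor eps(x_t, t, c_sem, c_temp).
    A real model would be a conditioned network; this just mixes the
    conditions in so the sketch runs end to end."""
    return 0.1 * x + 0.05 * sem_cond + 0.05 * temp_cond

def reverse_ray(x_T, sem_cond, temp_cond):
    """One ray of the sector: a deterministic (DDIM, eta=0) reverse
    process from the shared noise point x_T."""
    x = x_T.copy()
    for t in reversed(range(1, T)):
        eps = toy_denoiser(x, t, sem_cond, temp_cond)
        x0 = (x - np.sqrt(1 - alpha_bars[t]) * eps) / np.sqrt(alpha_bars[t])
        x = np.sqrt(alpha_bars[t - 1]) * x0 + np.sqrt(1 - alpha_bars[t - 1]) * eps
    return x

rng = np.random.default_rng(0)
shape = (3, 8, 8)                        # tiny "frame" for illustration
x_T = rng.standard_normal(shape)         # ONE shared noise point (sector apex)
sem_cond = rng.standard_normal(shape)    # shared semantic condition (e.g. text)

frames = []
for i in range(4):                       # 4 frames = 4 rays of the sector
    temp_cond = np.full(shape, i / 4.0)  # per-frame temporal condition
    frames.append(reverse_ray(x_T, sem_cond, temp_cond))
```

In this reading, the shared `x_T` and `sem_cond` carry the stochastic and semantic features common to all frames, while only `temp_cond` varies per frame. The paper's two-stage text-to-video strategy would then correspond to first producing the `temp_cond` sequence (e.g., optical flow) with a separate generative model, and then running the rays above conditioned on it.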


URL

https://arxiv.org/abs/2403.13408

PDF

https://arxiv.org/pdf/2403.13408.pdf

