MOSO: Decomposing MOtion, Scene and Object for Video Prediction

2023-03-07 06:54:48
Mingzhen Sun, Weining Wang, Xinxin Zhu, Jing Liu

Abstract

Motion, scene and object are three primary visual components of a video. In particular, objects represent the foreground, scenes represent the background, and motion traces their dynamics. Based on this insight, we propose a two-stage MOtion, Scene and Object decomposition framework (MOSO) for video prediction, consisting of MOSO-VQVAE and MOSO-Transformer. In the first stage, MOSO-VQVAE decomposes a previous video clip into the motion, scene and object components, and represents them as distinct groups of discrete tokens. Then, in the second stage, MOSO-Transformer predicts the object and scene tokens of the subsequent video clip based on the previous tokens and adds dynamic motion at the token level to the generated object and scene tokens. Our framework can be easily extended to unconditional video generation and video frame interpolation tasks. Experimental results demonstrate that our method achieves new state-of-the-art performance on five challenging benchmarks for video prediction and unconditional video generation: BAIR, RoboNet, KTH, KITTI and UCF101. In addition, MOSO can produce realistic videos by combining objects and scenes from different videos.
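
To make the two-stage token flow concrete, below is a minimal Python sketch of the interfaces the abstract describes. It is not the authors' implementation: every name (encode_clip, predict_next_tokens, decode_tokens), the codebook size, and the token shapes are hypothetical stand-ins, and the learned MOSO-VQVAE and MOSO-Transformer are replaced by random draws purely to illustrate how the three token groups move through the pipeline.

# Hypothetical sketch of MOSO's two-stage token flow; NOT the authors' code.
# The learned MOSO-VQVAE / MOSO-Transformer are replaced by random stand-ins,
# so only the token-group interfaces and shapes are illustrated here.
import numpy as np

CODEBOOK_SIZE = 1024     # assumed size of each discrete codebook
TOKENS_PER_GROUP = 64    # assumed number of tokens per component

def encode_clip(clip, rng):
    """Stage 1 (MOSO-VQVAE stand-in): decompose a video clip into three
    groups of discrete tokens -- motion (per frame), scene and object."""
    num_frames = clip.shape[0]
    motion = rng.integers(0, CODEBOOK_SIZE, size=(num_frames, TOKENS_PER_GROUP))
    scene = rng.integers(0, CODEBOOK_SIZE, size=TOKENS_PER_GROUP)
    obj = rng.integers(0, CODEBOOK_SIZE, size=TOKENS_PER_GROUP)
    return motion, scene, obj

def predict_next_tokens(prev_motion, prev_scene, prev_obj, rng):
    """Stage 2 (MOSO-Transformer stand-in): first predict the scene and
    object tokens of the next clip from the previous tokens, then generate
    motion tokens conditioned on them, i.e. dynamics added at token level."""
    next_scene = rng.integers(0, CODEBOOK_SIZE, size=prev_scene.shape)
    next_obj = rng.integers(0, CODEBOOK_SIZE, size=prev_obj.shape)
    next_motion = rng.integers(0, CODEBOOK_SIZE, size=prev_motion.shape)
    return next_motion, next_scene, next_obj

def decode_tokens(motion, scene, obj):
    """Decoder stand-in: map tokens back to frames (random pixels here)."""
    rng = np.random.default_rng(0)
    return rng.random((motion.shape[0], 64, 64, 3))  # (frames, H, W, C)

rng = np.random.default_rng(42)
prev_clip = rng.random((16, 64, 64, 3))          # 16 observed frames
m, s, o = encode_clip(prev_clip, rng)            # stage 1: tokenize clip
nm, ns, nobj = predict_next_tokens(m, s, o, rng) # stage 2: predict tokens
future = decode_tokens(nm, ns, nobj)             # decode predicted clip
# Because scene and object tokens are separate groups, content from two
# videos can be recombined, e.g. decode_tokens(nm, other_scene, nobj).
print(future.shape)

One design point worth noting from the abstract: keeping scene and object tokens in distinct groups is what enables the recombination shown in the final comment, where a foreground from one video is decoded against the background of another.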

URL

https://arxiv.org/abs/2303.03684

PDF

https://arxiv.org/pdf/2303.03684.pdf

