Abstract
Motion, scene and object are three primary visual components of a video. In particular, objects represent the foreground, scenes represent the background, and motion traces their dynamics. Based on this insight, we propose a two-stage MOtion, Scene and Object decomposition framework (MOSO) for video prediction, consisting of MOSO-VQVAE and MOSO-Transformer. In the first stage, MOSO-VQVAE decomposes a previous video clip into the motion, scene and object components, and represents them as distinct groups of discrete tokens. Then, in the second stage, MOSO-Transformer predicts the object and scene tokens of the subsequent video clip based on the previous tokens and adds dynamic motion at the token level to the generated object and scene tokens. Our framework can be easily extended to unconditional video generation and video frame interpolation tasks. Experimental results demonstrate that our method achieves new state-of-the-art performance on five challenging benchmarks for video prediction and unconditional video generation: BAIR, RoboNet, KTH, KITTI and UCF101. In addition, MOSO can produce realistic videos by combining objects and scenes from different videos.
URL
https://arxiv.org/abs/2303.03684