Abstract
We propose a novel framework for object-centric video prediction, i.e., extracting the compositional structure of a video sequence and modeling object dynamics and interactions from visual observations in order to predict future object states, from which we can then generate subsequent video frames. With the goal of learning meaningful spatio-temporal object representations and accurately forecasting object states, we propose two object-centric video predictor (OCVP) transformer modules, which decouple the processing of temporal dynamics and object interactions, thus improving prediction performance. In our experiments, we show how our object-centric prediction framework, equipped with our OCVP predictors, outperforms object-agnostic video prediction models on two different datasets while maintaining consistent and accurate object representations.
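To make the decoupling idea concrete, below is a minimal PyTorch sketch of one predictor block with separate temporal attention (each object attends over its own history) and relational attention (objects within a frame attend to each other). This is an illustrative assumption, not the paper's actual implementation: the class name `DecoupledOCVPBlock`, the layer ordering, and all hyperparameters are hypothetical, and an autoregressive predictor would additionally apply a causal mask in the temporal attention.

```python
import torch
import torch.nn as nn


class DecoupledOCVPBlock(nn.Module):
    """Illustrative sketch of a decoupled object-centric predictor block.

    Input: object slots of shape (batch, time, num_objects, dim).
    Temporal attention mixes information across time, per object;
    relational attention mixes information across objects, per frame.
    """

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.temporal_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.relational_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)

    def forward(self, slots: torch.Tensor) -> torch.Tensor:
        B, T, N, D = slots.shape

        # Temporal attention: each object slot attends over its own history.
        # (no causal mask here for brevity; rollout would require one)
        x = slots.permute(0, 2, 1, 3).reshape(B * N, T, D)
        h = self.norm1(x)
        x = x + self.temporal_attn(h, h, h)[0]
        x = x.reshape(B, N, T, D).permute(0, 2, 1, 3)

        # Relational attention: slots within the same frame attend to each other.
        y = x.reshape(B * T, N, D)
        h = self.norm2(y)
        y = y + self.relational_attn(h, h, h)[0]

        # Position-wise feed-forward network on every slot.
        y = y + self.mlp(self.norm3(y))
        return y.reshape(B, T, N, D)
```

One appeal of this decoupled design is cost: joint attention over all T*N slot tokens scales as O((T*N)^2), whereas attending over time and objects separately scales as O(N*T^2 + T*N^2), while still letting each slot integrate both its own dynamics and its interactions with other objects.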
URL
https://arxiv.org/abs/2302.11850