Abstract
We propose a new object-centric video prediction algorithm based on the deep latent particle (DLP) representation. In comparison to existing slot- or patch-based representations, DLPs model the scene using a set of keypoints with learned parameters for properties such as position and size, and are both efficient and interpretable. Our method, deep dynamic latent particles (DDLP), yields state-of-the-art object-centric video prediction results on several challenging datasets. The interpretable nature of DDLP allows us to perform "what-if" generation, predicting the consequences of changing object properties in the initial frames, and DLP's compact structure enables efficient diffusion-based unconditional video generation. Videos, code and pre-trained models are available: this https URL
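To make the keypoint idea concrete, here is a minimal sketch of a particle-based scene state and a "what-if" edit on it. All attribute names and dimensions are illustrative assumptions, not the paper's exact parameterization:

```python
import numpy as np

# Hypothetical sketch of a deep-latent-particle (DLP) style scene state.
# Attribute names and sizes below are illustrative assumptions.
N_PARTICLES = 10   # number of particles (keypoints) per frame
FEATURE_DIM = 4    # size of the learned appearance code per particle

rng = np.random.default_rng(0)

# Each particle carries a small set of interpretable attributes:
positions = rng.uniform(-1.0, 1.0, size=(N_PARTICLES, 2))    # (x, y) in [-1, 1]
scales = rng.uniform(0.05, 0.3, size=(N_PARTICLES, 2))       # per-axis size
depth = rng.uniform(0.0, 1.0, size=(N_PARTICLES, 1))         # occlusion ordering
transparency = rng.uniform(0.0, 1.0, size=(N_PARTICLES, 1))  # presence weight
features = rng.normal(size=(N_PARTICLES, FEATURE_DIM))       # appearance code

# A "what-if" intervention: displace particle 0 in the initial frame
# and keep every other attribute fixed, then re-run prediction on the
# edited state (prediction model not shown here).
positions_edit = positions.copy()
positions_edit[0] += np.array([0.2, -0.1])

# The full per-particle latent is the concatenation of its attributes.
state = np.concatenate(
    [positions_edit, scales, depth, transparency, features], axis=1
)
print(state.shape)  # (10, 10) with these illustrative sizes
```

Because every particle attribute has a direct physical meaning (position, size, depth, transparency), editing one entry of the state and re-running the predictor yields a targeted counterfactual, which is what makes the representation interpretable compared to slot vectors.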
URL
https://arxiv.org/abs/2306.05957