Abstract
Existing long-term video prediction methods often rely on an autoregressive prediction mechanism, which suffers from error propagation, particularly in distant future frames. To address this limitation, this paper proposes the first AutoRegression-Free (ARFree) video prediction framework built on diffusion models. Unlike autoregressive prediction mechanisms, ARFree directly predicts any future frame tuple from the context frame tuple. The proposed ARFree consists of two key components: 1) a motion prediction module that predicts future motion using motion features extracted from the context frame tuple; 2) a training method that improves motion continuity and contextual consistency between adjacent future frame tuples. Our experiments on two benchmark datasets show that the proposed ARFree video prediction framework outperforms several state-of-the-art video prediction methods.
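To make the autoregression-free idea concrete, the sketch below contrasts a standard autoregressive rollout with the direct, index-conditioned prediction scheme described in the abstract. This is a minimal illustration only: the `DummyPredictor` stand-in, function names, and tensor shapes are assumptions, not the paper's actual diffusion model or API.

```python
import torch

# Hypothetical predictor interface: given context frames and the indices of
# the target frames, return predictions for those frames. In ARFree this role
# is played by a conditional diffusion model; here it is a zero-output stub.
class DummyPredictor(torch.nn.Module):
    def forward(self, context, target_indices):
        B, T_ctx, C, H, W = context.shape
        T_out = target_indices.shape[-1]
        return torch.zeros(B, T_out, C, H, W)

def autoregressive_rollout(model, context, n_tuples, tuple_len):
    """Autoregressive scheme: each predicted tuple becomes the context for the
    next one, so prediction errors compound in distant future frames."""
    frames, ctx = [], context
    for k in range(n_tuples):
        idx = torch.arange(k * tuple_len, (k + 1) * tuple_len)
        pred = model(ctx, idx)
        frames.append(pred)
        ctx = pred  # predictions are fed back as inputs -> error propagation
    return torch.cat(frames, dim=1)

def arfree_prediction(model, context, n_tuples, tuple_len):
    """AR-free scheme: every future tuple is predicted directly from the same
    real context tuple, conditioned on its frame indices, so predicted frames
    are never re-used as inputs."""
    frames = []
    for k in range(n_tuples):
        idx = torch.arange(k * tuple_len, (k + 1) * tuple_len)
        frames.append(model(context, idx))  # context stays the observed frames
    return torch.cat(frames, dim=1)

if __name__ == "__main__":
    model = DummyPredictor()
    context = torch.randn(1, 4, 3, 64, 64)  # B x T_ctx x C x H x W
    out = arfree_prediction(model, context, n_tuples=3, tuple_len=4)
    print(out.shape)  # torch.Size([1, 12, 3, 64, 64])
```

Because each future tuple depends only on the observed context and a frame-index condition, the tuples can also be predicted in parallel; the paper's motion prediction module and training objectives then enforce continuity and consistency across adjacent tuples.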
URL
https://arxiv.org/abs/2505.22111