Abstract
Video prediction (VP) generates future frames by leveraging spatial representations and temporal context from past frames. Traditional recurrent neural network (RNN)-based models enhance memory cell structures to capture spatiotemporal states over extended durations but suffer from gradual loss of object appearance details. To address this issue, we propose the strong recollection VP (SRVP) model, which integrates standard attention (SA) and reinforced feature attention (RFA) modules. Both modules employ scaled dot-product attention to extract temporal context and spatial correlations, which are then fused to enhance spatiotemporal representations. Experiments on three benchmark datasets demonstrate that SRVP mitigates image quality degradation in RNN-based models while achieving predictive performance comparable to RNN-free architectures.
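For reference, the scaled dot-product attention both modules employ computes softmax(QK^T / sqrt(d_k))V over a sequence of tokens. The PyTorch sketch below is a minimal illustration of that operation applied along the temporal axis (frames) and the spatial axis (positions), with the two outputs fused by addition. All tensor shapes, the function name, and the additive fusion are assumptions for illustration; the abstract does not specify the internals of the SA and RFA modules.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, num_tokens, d_k); tokens may index frames (temporal
    # context) or spatial positions (spatial correlations), per the abstract.
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5  # (batch, tokens, tokens)
    weights = F.softmax(scores, dim=-1)            # attention distribution
    return weights @ v                             # weighted sum of values

# Hypothetical usage: attend over time and over space, then fuse by addition.
# (The actual fusion mechanism in SRVP is not specified in the abstract.)
B, T, HW, C = 2, 10, 64, 32                 # batch, frames, positions, channels
frames = torch.randn(B, T, HW, C)

t = frames.transpose(1, 2).reshape(B * HW, T, C)   # tokens = time steps
temporal = scaled_dot_product_attention(t, t, t)
temporal = temporal.reshape(B, HW, T, C).transpose(1, 2)

s = frames.reshape(B * T, HW, C)                   # tokens = spatial positions
spatial = scaled_dot_product_attention(s, s, s)
spatial = spatial.reshape(B, T, HW, C)

fused = temporal + spatial  # enhanced spatiotemporal representation (assumed)
```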
URL
https://arxiv.org/abs/2504.08012