Abstract
Recent advancements in video saliency prediction (VSP) have shown promising performance in emulating the human visual system, which is the primary goal of VSP. However, current state-of-the-art models rely on spatio-temporal transformers trained on limited amounts of data, which hinders their generalizability and their adaptation to downstream tasks. Vision foundation models present a potential solution to improve the VSP process. However, adapting image foundation models to the video domain poses significant challenges in modeling scene dynamics and capturing temporal information. To address these challenges, and as the first initiative to design a VSP model based on video foundation models, we introduce SalFoM, a novel encoder-decoder video transformer architecture. Our model employs UnMasked Teacher (UMT) as its feature extractor and presents a heterogeneous decoder that features a locality-aware spatio-temporal transformer and integrates local and global spatio-temporal information from various perspectives to produce the final saliency map. Our qualitative and quantitative experiments on the challenging VSP benchmark datasets DHF1K, Hollywood-2, and UCF-Sports demonstrate the superiority of the proposed model over state-of-the-art methods.
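To make the described pipeline concrete, below is a minimal PyTorch sketch of an encoder-decoder VSP model of this kind: a video backbone producing spatio-temporal tokens, followed by a decoder that fuses a local (convolutional) branch with a global (attention) branch into a saliency map. This is an illustration only, not the authors' implementation; `UMTEncoderStub`, `HeterogeneousDecoderStub`, and all dimensions are hypothetical stand-ins for the actual UMT backbone and SalFoM decoder.

```python
# Minimal sketch of an encoder-decoder video saliency prediction (VSP) model.
# All module names and dimensions are hypothetical; UMT itself is stubbed out.
import torch
import torch.nn as nn


class UMTEncoderStub(nn.Module):
    """Stand-in for the UnMasked Teacher (UMT) video backbone.

    Maps a clip (B, 3, T, H, W) to spatio-temporal features
    (B, C, T', H', W'); here a single strided 3D conv plays that role.
    """
    def __init__(self, dim=256):
        super().__init__()
        self.patch_embed = nn.Conv3d(3, dim, kernel_size=(2, 16, 16),
                                     stride=(2, 16, 16))

    def forward(self, x):
        return self.patch_embed(x)


class HeterogeneousDecoderStub(nn.Module):
    """Toy decoder combining a 'local' conv branch with a 'global'
    self-attention branch, fused into a per-frame saliency map."""
    def __init__(self, dim=256):
        super().__init__()
        self.local = nn.Conv3d(dim, dim, kernel_size=3, padding=1)
        self.global_attn = nn.TransformerEncoderLayer(
            d_model=dim, nhead=8, batch_first=True)
        self.head = nn.Conv3d(dim, 1, kernel_size=1)

    def forward(self, feats):
        b, c, t, h, w = feats.shape
        local = self.local(feats)                       # local spatio-temporal cues
        tokens = feats.flatten(2).transpose(1, 2)       # (B, T'*H'*W', C)
        glob = self.global_attn(tokens)                 # global spatio-temporal cues
        glob = glob.transpose(1, 2).reshape(b, c, t, h, w)
        sal = self.head(local + glob)                   # fuse and project to 1 channel
        return torch.sigmoid(sal)                       # (B, 1, T', H', W')


class VSPModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = UMTEncoderStub()
        self.decoder = HeterogeneousDecoderStub()

    def forward(self, clip):
        return self.decoder(self.encoder(clip))


if __name__ == "__main__":
    model = VSPModel()
    clip = torch.randn(1, 3, 16, 224, 224)              # (B, C, T, H, W)
    print(model(clip).shape)                            # torch.Size([1, 1, 8, 14, 14])
```

In a real system the low-resolution output would be progressively upsampled back to the input resolution; the sketch omits this to keep the encoder-decoder flow readable.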
URL
https://arxiv.org/abs/2404.03097