
ViDaS Video Depth-aware Saliency Network

2023-05-19 15:04:49
Ioanna Diamanti, Antigoni Tsiami, Petros Koutras, Petros Maragos

Abstract

We introduce ViDaS, a two-stream, fully convolutional Video Depth-Aware Saliency network that addresses the problem of attention modeling "in-the-wild" via saliency prediction in videos. Contrary to existing visual saliency approaches that use only RGB frames as input, our network also employs depth as an additional modality. The network consists of two visual streams, one for the RGB frames and one for the depth frames. Both streams follow an encoder-decoder approach and are fused to obtain a final saliency map. The network is trained end-to-end and evaluated on a variety of databases with eye-tracking data, covering a wide range of video content. Although the publicly available datasets do not contain depth, we estimate it using three different state-of-the-art methods, to enable comparisons and deeper insight. In most cases, our method outperforms both state-of-the-art models and our RGB-only variant, which indicates that depth can be beneficial for accurately estimating saliency in videos displayed on a 2D screen. Depth has been widely used to assist salient object detection, where it has proven very beneficial. Our problem, however, differs significantly from salient object detection, since it is not restricted to specific salient objects but predicts human attention in a more general sense. The two problems have not only different objectives, but also different ground-truth data and evaluation metrics. To the best of our knowledge, this is the first competitive deep learning video saliency estimation approach that combines RGB and depth features to address the general problem of saliency estimation "in-the-wild". The code will be publicly released.
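To make the architecture described in the abstract concrete, the sketch below outlines a two-stream, fully convolutional encoder-decoder with late fusion in PyTorch. It is a minimal per-frame illustration under stated assumptions: the layer sizes, the concatenation-based fusion, and the use of 2D rather than spatio-temporal convolutions are our simplifications, not the paper's exact design.

```python
# Minimal sketch of a two-stream, fully convolutional encoder-decoder
# saliency network in the spirit of ViDaS. All module sizes and the
# fusion scheme are illustrative assumptions; the paper's network
# operates on video clips, whereas this sketch processes single frames.
import torch
import torch.nn as nn

def encoder(in_channels):
    # Downsampling convolutional encoder (stride-2 convs halve resolution).
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
    )

def decoder():
    # Upsampling decoder restoring the input resolution.
    return nn.Sequential(
        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(inplace=True),
    )

class TwoStreamSaliencyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.rgb_stream = nn.Sequential(encoder(3), decoder())    # RGB frames
        self.depth_stream = nn.Sequential(encoder(1), decoder())  # depth frames
        # Late fusion: concatenate the decoded features of both streams
        # and project them to a single-channel saliency map.
        self.fuse = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, rgb, depth):
        f = torch.cat([self.rgb_stream(rgb), self.depth_stream(depth)], dim=1)
        return torch.sigmoid(self.fuse(f))  # per-pixel saliency in [0, 1]

if __name__ == "__main__":
    net = TwoStreamSaliencyNet()
    rgb = torch.randn(1, 3, 64, 64)    # one RGB frame
    depth = torch.randn(1, 1, 64, 64)  # one estimated depth frame
    print(net(rgb, depth).shape)       # torch.Size([1, 1, 64, 64])
```

Channel concatenation followed by a 1x1 convolution is only one common way to fuse two streams into a single saliency map; the paper states that the streams are fused but this sketch does not claim to reproduce its fusion mechanism.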


URL

https://arxiv.org/abs/2305.11729

PDF

https://arxiv.org/pdf/2305.11729.pdf

