Paper Reading AI Learner

Top-down Visual Saliency Guided by Captions

2017-04-12 22:49:47
Vasili Ramanishka, Abir Das, Jianming Zhang, Kate Saenko

Abstract

Neural image/video captioning models can generate accurate descriptions, but their internal process of mapping regions to words is a black box and therefore difficult to explain. Top-down neural saliency methods can find important regions given a high-level semantic task such as object classification, but cannot use a natural language sentence as the top-down input for the task. In this paper, we propose Caption-Guided Visual Saliency to expose the region-to-word mapping in modern encoder-decoder networks and demonstrate that it is learned implicitly from caption training data, without any pixel-level annotations. Our approach can produce spatial or spatiotemporal heatmaps both for predicted captions and for arbitrary query sentences. It recovers saliency without the overhead of introducing explicit attention layers, and can be used to analyze a variety of existing model architectures and improve their design. Evaluation on large-scale video and image datasets demonstrates that our approach achieves captioning performance comparable to existing methods while providing more accurate saliency heatmaps. Our code is available at visionlearninggroup.github.io/caption-guided-saliency/.
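
The abstract's central technical claim, that an ordinary encoder-decoder captioning model implicitly learns a region-to-word mapping that can be surfaced as saliency heatmaps without explicit attention layers, can be pictured as a masking experiment: for each word in a caption, measure how much the decoder's probability for that word drops when an individual frame or region descriptor is removed from the encoder input. The sketch below illustrates only that general idea; the ToyCaptionModel class, its word_prob interface, and the toy numpy data are hypothetical stand-ins, not the authors' implementation.

# Minimal sketch of caption-conditioned saliency via feature masking.
# All names below (ToyCaptionModel, word_prob, saliency_map) are
# illustrative assumptions, not the paper's released code.

import numpy as np

class ToyCaptionModel:
    """Stand-in for an encoder-decoder captioning model.

    word_prob(features, word_id) returns the probability the decoder
    assigns to the caption word `word_id`, given the set of region/frame
    descriptors `features` (shape: [num_items, dim]). This toy version
    uses a fixed random projection so the script actually runs.
    """
    def __init__(self, dim, vocab_size, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((dim, vocab_size))

    def word_prob(self, features, word_id):
        logits = features.mean(axis=0) @ self.W      # pool descriptors, project to vocabulary
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        return probs[word_id]

def saliency_map(model, features, word_ids):
    """Return a [num_words, num_items] matrix of masking-based saliency scores."""
    num_items = features.shape[0]
    scores = np.zeros((len(word_ids), num_items))
    for t, w in enumerate(word_ids):
        p_full = model.word_prob(features, w)        # probability with all descriptors present
        for i in range(num_items):
            masked = np.delete(features, i, axis=0)  # drop descriptor i (one frame/region)
            p_masked = model.word_prob(masked, w)
            scores[t, i] = max(p_full - p_masked, 0.0)  # larger drop => more salient item for this word
        if scores[t].sum() > 0:
            scores[t] /= scores[t].sum()             # normalize per word to form a heatmap
    return scores

if __name__ == "__main__":
    feats = np.random.default_rng(1).standard_normal((8, 16))     # 8 frames/regions, 16-d descriptors
    model = ToyCaptionModel(dim=16, vocab_size=100)
    heatmap = saliency_map(model, feats, word_ids=[3, 17, 42])    # toy caption of three word ids
    print(heatmap.shape)                                          # (3, 8): per-item saliency per word

Because the per-word scores come only from probing an already-trained captioning model, the same procedure applies to a predicted caption or to an arbitrary query sentence, which is the behavior the abstract describes.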

URL

https://arxiv.org/abs/1612.07360

PDF

https://arxiv.org/pdf/1612.07360.pdf
