Paper Reading AI Learner

Multi-source weak supervision for saliency detection

2019-04-01 05:19:19
Yu Zeng, Yunzhi Zhuge, Huchuan Lu, Lihe Zhang, Mingyang Qian, Yizhou Yu

Abstract

The high cost of pixel-level annotations makes it appealing to train saliency detection models with weak supervision. However, a single weak supervision source usually does not contain enough information to train a well-performing model. To this end, we propose a unified framework to train saliency detection models with diverse weak supervision sources. In this paper, we use category labels, captions, and unlabelled data for training, yet other supervision sources can also be plugged into this flexible framework. We design a classification network (CNet) and a caption generation network (PNet), which learn to predict object categories and generate captions, respectively, while highlighting the most important regions for their corresponding tasks. An attention transfer loss is designed to transmit supervision signals between networks, so that a network trained with one supervision source can benefit from another. An attention coherence loss is defined on unlabelled data to encourage the networks to detect generally salient regions rather than task-specific regions. We use CNet and PNet to generate pixel-level pseudo labels to train a saliency prediction network (SNet). During the testing phase, only SNet is needed to predict saliency maps. Experiments demonstrate that the performance of our method compares favourably against unsupervised and weakly supervised methods, and even some supervised methods.
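The abstract mentions an attention transfer loss that lets one network's confident attention supervise the other's. As a rough illustration of that general idea only (the paper's exact formulation is not given here), the sketch below treats confidently high/low pixels of a source attention map as pseudo labels for a target map via binary cross-entropy; the function name and thresholds are assumptions for illustration:

```python
import numpy as np

def attention_transfer_loss(a_src, a_tgt, lo=0.3, hi=0.7):
    """Illustrative sketch, not the paper's exact loss.

    Pixels where the source network's attention a_src is confidently
    salient (> hi) or confidently non-salient (< lo) act as pseudo
    labels that supervise the target network's attention a_tgt
    through binary cross-entropy; uncertain pixels are ignored.
    Both inputs are attention maps with values in [0, 1].
    """
    eps = 1e-7
    pos = a_src > hi                      # confidently salient pixels
    neg = a_src < lo                      # confidently non-salient pixels
    a = np.clip(a_tgt, eps, 1.0 - eps)    # avoid log(0)
    loss_pos = -np.log(a[pos]).sum() if pos.any() else 0.0
    loss_neg = -np.log(1.0 - a[neg]).sum() if neg.any() else 0.0
    n = int(pos.sum() + neg.sum())
    return (loss_pos + loss_neg) / max(n, 1)
```

When the two maps agree on which regions are salient, the loss is small; when they disagree, it grows, which pushes the two networks toward a consensus saliency estimate.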


URL

https://arxiv.org/abs/1904.00566

PDF

https://arxiv.org/pdf/1904.00566.pdf

