
OCNet: Object Context Network for Scene Parsing

2018-09-04 12:22:10
Yuhui Yuan, Jingdong Wang

Abstract

Context is essential for various computer vision tasks. State-of-the-art scene parsing methods exploit context defined at the image level, which mixes objects belonging to different categories. Since the label of each pixel $\mathit{P}$ is defined as the category of the object it belongs to, we propose the pixel-wise Object Context, which consists of the objects belonging to the same category as pixel $\mathit{P}$. The representation of pixel $\mathit{P}$'s object context is the aggregation of the features of all pixels sharing the same category as $\mathit{P}$. Since the ground-truth object that pixel $\mathit{P}$ belongs to is unavailable, we employ self-attention to approximate it by learning a pixel-wise similarity map. We further propose the Pyramid Object Context and the Atrous Spatial Pyramid Object Context to capture context at multiple scales. Based on the object context, we introduce OCNet and show that it achieves state-of-the-art performance on both the Cityscapes and ADE20K benchmarks. The code of OCNet will be made available at https://github.com/PkuRainBow/OCNet.
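
The aggregation described in the abstract can be written compactly: the object context of pixel $\mathit{P}$ is $c_P = \sum_i w_{P,i}\, x_i$, where $w_{P,i}$ is a softmax-normalized similarity between pixel $\mathit{P}$ and pixel $i$. The following PyTorch sketch illustrates this self-attention approximation of object context. It is a minimal illustration rather than the authors' released implementation; the 1x1 query/key/value projections, channel sizes, and softmax scaling are assumptions.

```python
# Minimal sketch of self-attention object context (not the authors' code).
# Each pixel's context is a weighted sum of all pixel features, with weights
# given by a learned, softmax-normalized pixel-wise similarity map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ObjectContextSketch(nn.Module):
    def __init__(self, in_channels: int, key_channels: int):
        super().__init__()
        # 1x1 projections for query/key/value (assumed, as in standard
        # self-attention); the similarity map is learned through them.
        self.query = nn.Conv2d(in_channels, key_channels, kernel_size=1)
        self.key = nn.Conv2d(in_channels, key_channels, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (N, HW, C_k)
        k = self.key(x).flatten(2)                    # (N, C_k, HW)
        v = self.value(x).flatten(2).transpose(1, 2)  # (N, HW, C)
        # Pixel-wise similarity map: row P softly selects the pixels that
        # share pixel P's object category.
        sim = F.softmax(q @ k / k.shape[1] ** 0.5, dim=-1)  # (N, HW, HW)
        # Object context: aggregate the features of those pixels.
        context = (sim @ v).transpose(1, 2).reshape(n, c, h, w)
        # Augment the original representation with its object context.
        return torch.cat([x, context], dim=1)

# Example: backbone features at 1/8 resolution of a 256x512 crop.
if __name__ == "__main__":
    feats = torch.randn(1, 512, 32, 64)
    ocm = ObjectContextSketch(in_channels=512, key_channels=256)
    print(ocm(feats).shape)  # torch.Size([1, 1024, 32, 64])
```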

URL

https://arxiv.org/abs/1809.00916

PDF

https://arxiv.org/pdf/1809.00916.pdf

