Paper Reading AI Learner

Macro-Micro Adversarial Network for Human Parsing

2018-07-22 08:49:49
Yawei Luo, Zhedong Zheng, Liang Zheng, Tao Guan, Junqing Yu, Yi Yang

Abstract

In human parsing, the pixel-wise classification loss suffers from two drawbacks: low-level local inconsistency and high-level semantic inconsistency. The introduction of an adversarial network tackles the two problems using a single discriminator. However, the two types of parsing inconsistency are generated by distinct mechanisms, so it is difficult for a single discriminator to solve them both. To address the two kinds of inconsistency, this paper proposes the Macro-Micro Adversarial Net (MMAN). It has two discriminators. One discriminator, Macro D, acts on the low-resolution label map and penalizes semantic inconsistency, e.g., misplaced body parts. The other discriminator, Micro D, focuses on multiple patches of the high-resolution label map to address local inconsistency, e.g., blurring and holes. Compared with traditional adversarial networks, MMAN not only enforces local and semantic consistency explicitly, but also avoids the poor convergence of adversarial networks when handling high-resolution images. In our experiments, we validate that the two discriminators are complementary to each other in improving human parsing accuracy. The proposed framework produces competitive parsing performance compared with state-of-the-art methods, i.e., mIoU = 46.81% and 59.91% on LIP and PASCAL-Person-Part, respectively. On the relatively small PPSS dataset, our pre-trained model demonstrates impressive generalization ability. The code is publicly available at https://github.com/RoyalVane/MMAN.
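The abstract's core idea — one discriminator scoring a downsampled label map globally (Macro D) and another scoring local patches of the full-resolution map (Micro D) — can be illustrated with a minimal numpy sketch. Everything here is an assumption for illustration: the pooling factor, patch size, loss weights `lambda_macro`/`lambda_micro`, and the use of mean-pooling stand in for the paper's actual CNN discriminators and hyperparameters.

```python
import numpy as np

def downsample(label_map, factor):
    # Average-pool a (H, W, C) soft label map; a stand-in for the
    # low-resolution input that Macro D judges for semantic consistency.
    h, w, c = label_map.shape
    return label_map.reshape(h // factor, factor,
                             w // factor, factor, c).mean(axis=(1, 3))

def patch_scores(label_map, patch):
    # Split a (H, W, C) map into non-overlapping patches and return one
    # score per patch, mimicking a PatchGAN-style Micro D that penalizes
    # local artifacts such as blurring and holes.
    h, w, c = label_map.shape
    return label_map.reshape(h // patch, patch,
                             w // patch, patch, c).mean(axis=(1, 3, 4))

def bce(pred, target):
    # Binary cross-entropy between discriminator scores and a real/fake label.
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return float(-(target * np.log(pred)
                   + (1 - target) * np.log(1 - pred)).mean())

# Toy "predicted" soft label map: 32x32 pixels, 3 semantic classes.
pred_map = np.full((32, 32, 3), 0.9)

macro_input = downsample(pred_map, factor=8)   # (4, 4, 3) for Macro D
micro_input = patch_scores(pred_map, patch=8)  # (4, 4) patch grid for Micro D

# Generator-side adversarial objective: both discriminators should be
# fooled into labeling the prediction as real (target = 1.0).
lambda_macro, lambda_micro = 1.0, 1.0          # assumed loss weights
adv_loss = (lambda_macro * bce(macro_input, 1.0)
            + lambda_micro * bce(micro_input, 1.0))
```

The point of the decomposition is visible in the two input shapes: Macro D never sees high-resolution detail, so it can only penalize global structure, while Micro D's per-patch scores localize errors that a single whole-image score would average away.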


URL

https://arxiv.org/abs/1807.08260

PDF

https://arxiv.org/pdf/1807.08260.pdf

