Paper Reading AI Learner

Unsupervised Person Re-identification by Soft Multilabel Learning

2019-03-15 02:10:57
Hong-Xing Yu, Wei-Shi Zheng, Ancong Wu, Xiaowei Guo, Shaogang Gong, Jian-Huang Lai

Abstract

Although unsupervised person re-identification (RE-ID) has drawn increasing research attention due to its potential to address the scalability problem of supervised RE-ID models, it is very challenging to learn discriminative information in the absence of pairwise labels across disjoint camera views. To overcome this problem, we propose a deep model for soft multilabel learning for unsupervised RE-ID. The idea is to learn a soft multilabel (real-valued label likelihood vector) for each unlabeled person by comparing the unlabeled person with a set of known reference persons from an auxiliary domain. We propose soft multilabel-guided hard negative mining to learn a discriminative embedding for the unlabeled target domain by exploring the similarity consistency of the visual features and the soft multilabels of unlabeled target pairs. Since most target pairs are cross-view pairs, we develop cross-view consistent soft multilabel learning to achieve the learning goal that the soft multilabels are consistently good across different camera views. To enable efficient soft multilabel learning, we introduce reference agent learning to represent each reference person by a reference agent in a joint embedding. We evaluate our unified deep model on Market-1501 and DukeMTMC-reID. Our model outperforms the state-of-the-art unsupervised RE-ID methods by clear margins. Code is available at https://github.com/KovenYu/MAR.
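To make the two core ideas concrete, here is a minimal NumPy sketch of (a) computing a soft multilabel as a softmax over similarities to reference agents, and (b) the similarity-consistency test behind soft multilabel-guided hard negative mining: a pair that looks visually similar but whose soft multilabels disagree is flagged as a hard negative. This is an illustrative sketch only, not the authors' MAR implementation; the function names and the thresholds `feat_thresh` and `label_thresh` are assumptions, and features/agents are taken to be L2-normalized so the dot product acts as cosine similarity.

```python
import numpy as np

def soft_multilabel(feature, reference_agents):
    """Soft multilabel: real-valued label-likelihood vector obtained by
    comparing an unlabeled person's feature with each reference agent.
    feature: (d,) L2-normalized; reference_agents: (k, d) L2-normalized.
    Returns a (k,) vector that sums to 1 (softmax over similarities)."""
    sims = reference_agents @ feature        # one similarity per reference person
    exp = np.exp(sims - sims.max())          # numerically stable softmax
    return exp / exp.sum()

def is_hard_negative(feat_a, feat_b, label_a, label_b,
                     feat_thresh=0.8, label_thresh=0.5):
    """Soft multilabel-guided hard negative mining (sketch): a pair whose
    visual features are very similar but whose soft multilabels do NOT
    agree is treated as a hard negative. Thresholds are illustrative."""
    visual_sim = float(feat_a @ feat_b)                       # cosine similarity
    label_agreement = float(np.minimum(label_a, label_b).sum())  # soft-label overlap in [0, 1]
    return visual_sim > feat_thresh and label_agreement < label_thresh
```

For example, two images with near-identical features but soft multilabels peaked on different reference persons would satisfy the similarity-consistency violation and be mined as a hard negative pair.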

URL

https://arxiv.org/abs/1903.06325

PDF

https://arxiv.org/pdf/1903.06325.pdf

