Paper Reading AI Learner

Hard-Aware Point-to-Set Deep Metric for Person Re-identification

2018-07-30 07:41:34
Rui Yu, Zhiyong Dou, Song Bai, Zhaoxiang Zhang, Yongchao Xu, Xiang Bai

Abstract

Person re-identification (re-ID) is a highly challenging task due to large variations of pose, viewpoint, illumination, and occlusion. Deep metric learning provides a satisfactory solution to person re-ID by training a deep network under the supervision of a metric loss, e.g., the triplet loss. However, the performance of deep metric learning is greatly limited by traditional sampling methods. To solve this problem, we propose a Hard-Aware Point-to-Set (HAP2S) loss with a soft hard-mining scheme. Based on the point-to-set triplet loss framework, the HAP2S loss adaptively assigns greater weights to harder samples. Several advantageous properties are observed when compared with other state-of-the-art loss functions: 1) Accuracy: the HAP2S loss consistently achieves higher re-ID accuracy than the alternatives on three large-scale benchmark datasets; 2) Robustness: the HAP2S loss is more robust to outliers than other losses; 3) Flexibility: the HAP2S loss does not rely on a specific weight function, i.e., different instantiations of the HAP2S loss are equally effective; 4) Generality: beyond person re-ID, we apply the proposed method to generic deep metric learning benchmarks including CUB-200-2011 and Cars196, and also achieve state-of-the-art results.
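The core idea of the abstract — a point-to-set triplet loss where harder samples receive exponentially larger weights instead of being hard-mined outright — can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the Euclidean distance, the exponential weight function, and the `sigma` temperature are assumptions chosen for clarity (the paper notes that different weight functions work equally well).

```python
import numpy as np

def hardness_weights(scores, sigma=0.5):
    # Numerically stable softmax over hardness scores:
    # larger score -> harder sample -> larger weight.
    z = scores / sigma
    z = z - z.max()
    w = np.exp(z)
    return w / w.sum()

def hap2s_loss(anchor, pos_set, neg_set, margin=0.4, sigma=0.5):
    """Sketch of a hard-aware point-to-set triplet loss.

    anchor: (d,) embedding of the anchor sample.
    pos_set: (n_p, d) embeddings sharing the anchor's identity.
    neg_set: (n_n, d) embeddings of other identities.

    Positives far from the anchor and negatives close to it are
    "hard"; both are softly up-weighted rather than hard-selected.
    """
    d_pos = np.linalg.norm(pos_set - anchor, axis=1)
    d_neg = np.linalg.norm(neg_set - anchor, axis=1)
    # Harder positives have LARGER distances ...
    w_pos = hardness_weights(d_pos, sigma)
    # ... harder negatives have SMALLER distances.
    w_neg = hardness_weights(-d_neg, sigma)
    p2s_pos = float((w_pos * d_pos).sum())  # weighted anchor-to-positive-set distance
    p2s_neg = float((w_neg * d_neg).sum())  # weighted anchor-to-negative-set distance
    return max(0.0, p2s_pos - p2s_neg + margin)

anchor = np.zeros(2)
pos_set = np.array([[0.1, 0.0], [0.3, 0.0]])
easy_neg = np.array([[1.0, 0.0], [2.0, 0.0]])   # far negatives: zero loss
hard_neg = np.array([[0.2, 0.0]])               # close negative: positive loss
```

Because the weighting is soft, every sample in the set contributes a gradient, which is what makes the scheme more robust to outliers than picking the single hardest sample.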


URL

https://arxiv.org/abs/1807.11206

PDF

https://arxiv.org/pdf/1807.11206.pdf

