Abstract
Deep metric learning (DML) learns a mapping into an embedding space in which similar data are close and dissimilar data are far apart. Most DML frameworks apply L2 normalization to feature vectors, and the resulting feature vectors are non-sparse. In this paper, we propose applying an L1 regularization loss to feature vectors. The proposed regularization emphasizes important features and suppresses unimportant ones in L2-normalized features. Because it regularizes only the feature vectors, L1 regularization can be combined with general DML losses. We finally propose the SparseSoftTriple loss, a combination of the SoftTriple loss and L1 regularization. We demonstrate the effectiveness of the proposed SparseSoftTriple loss on several datasets for image retrieval and fine-grained image tasks.
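The idea described above can be sketched in plain Python: for an L2-normalized vector, the L1 norm ranges from 1 (a one-hot, maximally sparse direction) to sqrt(d) (a uniform direction), so adding it as a penalty encourages sparsity. This is a minimal illustration only; the function names and the weighting hyperparameter `lam` are assumptions, and the paper's actual SparseSoftTriple loss combines this penalty with the SoftTriple loss rather than a generic DML loss.

```python
import math

def l2_normalize(v, eps=1e-12):
    # Standard L2 normalization, as applied in most DML frameworks.
    norm = math.sqrt(sum(x * x for x in v))
    return [x / (norm + eps) for x in v]

def l1_penalty(v):
    # L1 norm of the L2-normalized feature; it ranges from 1 (one-hot,
    # sparse) to sqrt(d) (uniform), so minimizing it promotes sparsity.
    return sum(abs(x) for x in l2_normalize(v))

def sparse_dml_loss(dml_loss, features, lam=0.1):
    # Hypothetical combination: a base DML loss (e.g. SoftTriple) plus the
    # batch-averaged L1 penalty; `lam` is an assumed weighting factor.
    reg = sum(l1_penalty(f) for f in features) / len(features)
    return dml_loss + lam * reg
```

A sparse (one-hot) feature incurs the minimum penalty, while a dense uniform feature incurs the maximum, which is what drives the regularizer toward sparse embeddings.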
URL
https://arxiv.org/abs/2110.03997