Abstract
Deep metric learning seeks an embedding in which semantically similar images map to nearby locations and semantically dissimilar images map to distant locations. Substantial work has focused on loss functions and training strategies that push images from the same class as close together in the embedding space as possible. In this paper, we propose an alternative, loosened embedding strategy that requires only that the embedding function map each training image toward the most similar examples from the same class, an approach we call "Easy Positive" mining. We present a collection of experiments and visualizations showing that Easy Positive mining leads to embeddings that are more flexible and generalize better to new, unseen data. This simple mining strategy yields recall performance that exceeds state-of-the-art approaches (including those with complicated loss functions and ensemble methods) on image retrieval datasets including CUB, Stanford Online Products, In-Shop Clothes, and Hotels-50K.
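To make the mining strategy concrete, below is a minimal PyTorch sketch of within-batch Easy Positive selection: for each anchor, the positive is the most similar same-class embedding in the batch, rather than a random or hard one. The pairing with the hardest in-batch negative and the softmax-style loss are illustrative assumptions based on common triplet-mining practice, not a claim about the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def easy_positive_loss(embeddings, labels):
    """Sketch of within-batch Easy Positive mining.

    embeddings: (B, D) tensor, assumed L2-normalized.
    labels:     (B,) integer class labels.
    """
    sim = embeddings @ embeddings.t()                      # pairwise cosine similarities
    same = labels.unsqueeze(0) == labels.unsqueeze(1)      # same-class mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=sim.device)

    pos_sim = sim.masked_fill(~same | eye, float('-inf'))  # candidates: same class, not self
    neg_sim = sim.masked_fill(same, float('-inf'))         # candidates: different class

    easy_pos = pos_sim.max(dim=1).values                   # most similar same-class example
    hard_neg = neg_sim.max(dim=1).values                   # most similar other-class example

    # Keep only anchors that actually have a same-class partner in the batch.
    valid = (same & ~eye).any(dim=1)

    # Assumed softmax-style objective: pull each anchor toward its easy
    # positive relative to its hardest in-batch negative.
    logits = torch.stack([easy_pos, hard_neg], dim=1)[valid]
    target = torch.zeros(logits.size(0), dtype=torch.long, device=sim.device)
    return F.cross_entropy(logits, target)
```

In this sketch the key design choice is the `pos_sim.max(...)` step: selecting the easiest (most similar) positive asks the network only to keep each image near its closest same-class neighbor, rather than collapsing every class to a single point.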
URL
https://arxiv.org/abs/1904.04370