Abstract
Fine-grained image retrieval (FGIR) aims to learn visual representations that distinguish visually similar objects while maintaining generalization. Existing methods propose to generate discriminative features, but rarely consider the particularity of the FGIR task itself. This paper presents a meticulous analysis leading to practical guidelines for identifying subcategory-specific discrepancies and generating discriminative features when designing effective FGIR models. These guidelines include emphasizing the object (G1), highlighting subcategory-specific discrepancies (G2), and employing an effective training strategy (G3). Following G1 and G2, we design a novel Dual Visual Filtering mechanism for the plain vision transformer, denoted as DVF, to capture subcategory-specific discrepancies. Specifically, the dual visual filtering mechanism comprises an object-oriented module and a semantic-oriented module. These components serve to magnify objects and identify discriminative regions, respectively. Following G3, we implement a discriminative model training strategy to improve the discriminability and generalization ability of DVF. Extensive analysis and ablation studies confirm the efficacy of our proposed guidelines. Without bells and whistles, the proposed DVF achieves state-of-the-art performance on three widely-used fine-grained datasets in closed-set and open-set settings.
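The two-stage filtering idea can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names (`object_filter`, `semantic_filter`), the use of attention scores for object emphasis, and the prototype-similarity scoring are all assumptions made for illustration.

```python
import numpy as np

def object_filter(patch_tokens, attn_scores, keep_ratio=0.5):
    """Object-oriented stage (G1): keep the patches the model attends to most."""
    k = max(1, int(len(patch_tokens) * keep_ratio))
    keep = np.argsort(attn_scores)[-k:]            # indices of top-k attended patches
    return patch_tokens[keep]

def semantic_filter(patch_tokens, prototype, top_k=4):
    """Semantic-oriented stage (G2): keep patches most aligned with a
    subcategory prototype, i.e. the discriminative regions."""
    sims = patch_tokens @ prototype                # similarity of each patch to prototype
    keep = np.argsort(sims)[-top_k:]
    return patch_tokens[keep]

rng = np.random.default_rng(0)
tokens = rng.normal(size=(196, 64))                # 14x14 ViT patch tokens, dim 64
attn = rng.random(196)                             # hypothetical CLS-to-patch attention
proto = rng.normal(size=64)                        # hypothetical subcategory prototype

obj_tokens = object_filter(tokens, attn)           # object-oriented filtering
disc_tokens = semantic_filter(obj_tokens, proto)   # semantic-oriented filtering
descriptor = disc_tokens.mean(axis=0)              # pooled fine-grained descriptor
```

The point of chaining the two filters is that discriminative-region search (G2) operates only on patches already judged to belong to the object (G1), so background clutter cannot dominate the final descriptor.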
URL
https://arxiv.org/abs/2404.15771