Paper Reading AI Learner

DVF: Advancing Robust and Accurate Fine-Grained Image Retrieval with Retrieval Guidelines

2024-04-24 09:45:12
Xin Jiang, Hao Tang, Rui Yan, Jinhui Tang, Zechao Li

Abstract

Fine-grained image retrieval (FGIR) aims to learn visual representations that distinguish visually similar objects while maintaining generalization. Existing methods propose to generate discriminative features, but rarely consider the particularity of the FGIR task itself. This paper presents a meticulous analysis leading to the proposal of practical guidelines for identifying subcategory-specific discrepancies and generating discriminative features to design effective FGIR models. These guidelines include emphasizing the object (G1), highlighting subcategory-specific discrepancies (G2), and employing an effective training strategy (G3). Following G1 and G2, we design a novel Dual Visual Filtering mechanism for the plain vision transformer, denoted as DVF, to capture subcategory-specific discrepancies. Specifically, the dual visual filtering mechanism comprises an object-oriented module and a semantic-oriented module. These components serve to magnify objects and identify discriminative regions, respectively. Following G3, we implement a discriminative model training strategy to improve the discriminability and generalization ability of DVF. Extensive analysis and ablation studies confirm the efficacy of our proposed guidelines. Without bells and whistles, the proposed DVF achieves state-of-the-art performance on three widely-used fine-grained datasets in closed-set and open-set settings.
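The two filtering stages described above can be illustrated with a minimal numpy sketch. This is an assumption-laden illustration, not the paper's actual implementation: the function names, the attention-threshold rule for the object-oriented stage, and the feature-norm score for the semantic-oriented stage are all hypothetical stand-ins for the mechanisms the abstract only names.

```python
import numpy as np

def object_filter(patch_feats, cls_attn):
    """Object-oriented stage (G1, hypothetical rule): keep patch tokens
    whose attention from the class token exceeds the mean, as a rough
    proxy for 'emphasizing the object' over background."""
    mask = cls_attn > cls_attn.mean()
    return patch_feats[mask], mask

def semantic_filter(patch_feats, k=4):
    """Semantic-oriented stage (G2, hypothetical rule): keep the top-k
    tokens by feature norm, as a stand-in for selecting the most
    discriminative regions among the surviving object tokens."""
    scores = np.linalg.norm(patch_feats, axis=1)
    idx = np.argsort(scores)[::-1][:k]
    return patch_feats[idx]

rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 8))   # 16 patch tokens, 8-dim features
attn = rng.random(16)              # attention weights from the class token

obj_feats, mask = object_filter(feats, attn)       # object-level filtering
disc_feats = semantic_filter(obj_feats, k=4)       # region-level filtering
print(obj_feats.shape, disc_feats.shape)           # token set shrinks at each stage
```

The point of the sketch is the two-stage funnel: first suppress background tokens, then rank what remains by a discriminativeness score, so the final descriptor is built from a small set of object-focused regions.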

Abstract (translated)

Fine-grained image retrieval (FGIR) aims to learn visual representations that distinguish visually similar objects while maintaining generalization. Existing methods propose to generate discriminative features, but rarely consider the particularity of the FGIR task itself. This paper presents a meticulous analysis leading to practical guidelines, targeting subcategory-specific discrepancies, for designing effective FGIR models. These guidelines include emphasizing the object (G1), highlighting subcategory-specific discrepancies (G2), and employing an effective training strategy (G3). Following G1 and G2, we design a novel Dual Visual Filtering mechanism for the plain vision transformer, denoted as DVF, to capture subcategory-specific discrepancies. Specifically, the dual visual filtering mechanism comprises an object-oriented module and a semantic-oriented module, which serve to magnify objects and identify discriminative regions, respectively. Following G3, we implement a discriminative model training strategy to improve the discriminability and generalization ability of DVF. Extensive analysis and ablation studies confirm the efficacy of our proposed guidelines. Without bells and whistles, DVF achieves state-of-the-art performance on three widely-used fine-grained datasets in both closed-set and open-set settings.

URL

https://arxiv.org/abs/2404.15771

PDF

https://arxiv.org/pdf/2404.15771.pdf
