Paper Reading AI Learner

Enhancing Few-Shot Out-of-Distribution Detection via the Refinement of Foreground and Background

2026-01-21 15:12:11
Tianyu Li, Songyue Cai, Zongqian Wu, Ping Hu, Xiaofeng Zhu

Abstract

CLIP-based foreground-background (FG-BG) decomposition methods have proven remarkably effective at improving few-shot out-of-distribution (OOD) detection. However, existing approaches still suffer from several limitations. For the background regions obtained from decomposition, existing methods apply a uniform suppression strategy to all patches, overlooking the fact that different patches contribute differently to the prediction. For foreground regions, existing methods fail to adequately account for local patches that are similar in appearance or semantics to other classes, which can mislead the training process. To address these issues, we propose a new plug-and-play framework. The framework consists of three core components: (1) a Foreground-Background Decomposition module, which follows previous FG-BG methods to separate an image into foreground and background regions; (2) an Adaptive Background Suppression module, which adaptively weights the suppression of each background patch by its classification entropy; and (3) a Confusable Foreground Rectification module, which identifies and rectifies confusable foreground patches. Extensive experimental results demonstrate that the proposed plug-and-play framework significantly improves the performance of existing FG-BG decomposition methods. Code is available at: this https URL.
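The abstract does not give the exact formulation of the Adaptive Background Suppression module, but the idea of weighting each background patch by its classification entropy, rather than suppressing all patches uniformly, can be sketched as follows. This is a hypothetical illustration, not the paper's implementation; the function names (`patch_entropy`, `adaptive_bg_suppression`) and the particular weighting scheme are assumptions:

```python
import numpy as np

def patch_entropy(logits):
    """Per-patch classification entropy from logits of shape (num_patches, num_classes)."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerically stable softmax
    p = np.exp(z)
    p /= p.sum(axis=1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=1)

def adaptive_bg_suppression(bg_logits):
    """Suppress background patches in proportion to their confidence.

    Instead of a uniform entropy-maximization term over all background
    patches, each patch's term is weighted by how confidently (low-entropy)
    it is classified, so confident background patches are suppressed harder.
    """
    ent = patch_entropy(bg_logits)
    max_ent = np.log(bg_logits.shape[1])   # entropy of the uniform distribution
    weights = (max_ent - ent) / max_ent    # confident patches -> larger weight
    return (weights * (max_ent - ent)).mean()
```

Under this sketch, background patches whose predictions are already near-uniform contribute almost nothing to the loss, while confidently classified background patches, the ones most likely to distort OOD scores, dominate the suppression term.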


URL

https://arxiv.org/abs/2601.15065

PDF

https://arxiv.org/pdf/2601.15065.pdf
