Abstract
CLIP-based foreground-background (FG-BG) decomposition methods have proven remarkably effective at improving few-shot out-of-distribution (OOD) detection. However, existing approaches still suffer from several limitations. For the background regions produced by decomposition, existing methods apply a uniform suppression strategy to all patches, overlooking how much each patch actually contributes to the prediction. For foreground regions, existing methods fail to adequately account for local patches whose appearance or semantics resemble other classes, which can mislead training. To address these issues, we propose a new plug-and-play framework consisting of three core components: (1) a Foreground-Background Decomposition module, which follows previous FG-BG methods to separate an image into foreground and background regions; (2) an Adaptive Background Suppression module, which adaptively weights the classification entropy of each background patch; and (3) a Confusable Foreground Rectification module, which identifies and rectifies confusable foreground patches. Extensive experiments demonstrate that the proposed framework significantly improves the performance of existing FG-BG decomposition methods. Code is available at: this https URL.
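The core of the Adaptive Background Suppression idea can be illustrated with a minimal sketch: instead of averaging the classification entropy uniformly over all background patches, each patch's entropy is given an adaptive weight. The abstract does not specify the weighting formula, so the normalized-entropy weighting below, the function names, and the shapes are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over class logits."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_background_weights(bg_patch_logits):
    """Per-patch weights for background suppression.

    Hypothetical form: weight each background patch by its
    (normalized) classification entropy, so patches that are more
    class-confusable contribute more to the suppression loss than
    a uniform average would allow.
    """
    probs = softmax(bg_patch_logits)                    # (P, C)
    entropy = -(probs * np.log(probs + 1e-8)).sum(-1)   # (P,)
    weights = entropy / entropy.sum()                   # sums to 1
    return weights, entropy

# Toy example: 16 background patches, 10 in-distribution classes.
rng = np.random.default_rng(0)
logits = rng.standard_normal((16, 10))
w, h = adaptive_background_weights(logits)
# Weighted suppression objective, replacing a uniform mean of h.
loss = (w * h).sum()
```

A uniform strategy would correspond to `w = np.full(16, 1/16)`; the adaptive variant simply replaces that constant vector with a data-dependent one.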
URL
https://arxiv.org/abs/2601.15065