Abstract
Out-of-distribution (OOD) detection is crucial for the safe deployment of neural networks. Existing CLIP-based approaches perform OOD detection by devising novel scoring functions or sophisticated fine-tuning methods. In this work, we propose SeTAR, a novel, training-free OOD detection method that leverages selective low-rank approximation of weight matrices in vision-language and vision-only models. SeTAR enhances OOD detection via post-hoc modification of the model's weight matrices using a simple greedy search algorithm. Based on SeTAR, we further propose SeTAR+FT, a fine-tuning extension that optimizes model performance for OOD detection tasks. Extensive evaluations on ImageNet1K and Pascal-VOC benchmarks show SeTAR's superior performance, reducing the false positive rate by up to 18.95% and 36.80% compared to zero-shot and fine-tuning baselines, respectively. Ablation studies further validate our approach's effectiveness, robustness, and generalizability across different model backbones. Our work offers a scalable, efficient solution for OOD detection, setting a new state-of-the-art in this area.
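To make the core idea concrete, below is a minimal sketch of low-rank weight approximation via truncated SVD combined with a greedy search over candidate ranks. This is an illustration under stated assumptions, not the paper's implementation: the toy weight matrix, the `candidate_ranks` list, and the `score_fn` placeholder (standing in for an OOD-detection criterion such as FPR95 on a held-out set) are all hypothetical.

```python
# Minimal sketch: selective low-rank approximation of a weight matrix,
# with a greedy search over truncation ranks. Illustrative only; the
# score function and candidate ranks are hypothetical placeholders.
import numpy as np

def low_rank_approx(W: np.ndarray, rank: int) -> np.ndarray:
    """Best rank-`rank` approximation of W (Eckart-Young, via truncated SVD)."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :rank] * S[:rank]) @ Vt[:rank, :]

def greedy_rank_search(W: np.ndarray, score_fn, candidate_ranks) -> np.ndarray:
    """Greedily keep the truncation that maximizes a validation score.

    `score_fn` stands in for an OOD-detection criterion evaluated with the
    modified weights (e.g. a detection metric on held-out data).
    """
    best_W, best_score = W, score_fn(W)
    for r in candidate_ranks:
        W_r = low_rank_approx(W, r)
        s = score_fn(W_r)
        if s > best_score:
            best_W, best_score = W_r, s
    return best_W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((64, 64))  # toy stand-in for a model weight matrix
    # Placeholder score for demonstration; a real setup would plug in an
    # OOD metric computed with the candidate weights.
    score = lambda M: -float(np.linalg.norm(M - W))
    W_new = greedy_rank_search(W, score, candidate_ranks=[8, 16, 32, 48])
    print("approximation error:", np.linalg.norm(W_new - W))
```

In this reading, "training-free" corresponds to the fact that only an SVD and a scored search are needed: no gradients are computed and no weights are learned, which is why the post-hoc modification is cheap to apply to a pretrained backbone.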
URL
https://arxiv.org/abs/2406.12629