Abstract
Ensuring reliability is paramount in deep learning, particularly within the domain of medical imaging, where diagnostic decisions often hinge on model outputs. The capacity to separate out-of-distribution (OOD) samples has proven to be a valuable indicator of a model's reliability. In medical imaging this is especially critical, as identifying OOD inputs can help flag potential anomalies that might otherwise go undetected. While many OOD detection methods rely on feature- or logit-space representations, recent work suggests these approaches may not fully capture OOD diversity. To address this, we propose a novel OOD scoring mechanism, called NERO, that leverages neuron-level relevance at the feature layer. Specifically, we cluster neuron-level relevance for each in-distribution (ID) class to form representative centroids and introduce a relevance distance metric that quantifies a new sample's deviation from these centroids, enhancing OOD separability. Additionally, we refine performance by incorporating scaled relevance in the bias term and combining feature norms. Our framework also enables explainable OOD detection. We validate its effectiveness across multiple deep learning architectures on the gastrointestinal imaging benchmarks Kvasir and GastroVision, achieving improvements over state-of-the-art OOD detection methods.
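The centroid-plus-distance scoring described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: simple per-class averaging of relevance vectors and Euclidean distance are assumptions here, while the paper's actual relevance attribution, clustering, bias-term scaling, and feature-norm combination may differ.

```python
import numpy as np

def class_relevance_centroids(relevances, labels):
    """Average neuron-level relevance vectors for each ID class
    to obtain one representative centroid per class."""
    classes = np.unique(labels)
    return {c: relevances[labels == c].mean(axis=0) for c in classes}

def nero_score(relevance, centroids):
    """OOD score: distance from a sample's relevance vector to the
    nearest class centroid (larger means more OOD-like). Euclidean
    distance is an assumption for this sketch."""
    return min(np.linalg.norm(relevance - mu) for mu in centroids.values())

# Toy example: two ID classes with 4-dimensional relevance vectors.
rng = np.random.default_rng(0)
R = np.vstack([rng.normal(0.0, 0.1, (10, 4)),   # class 0 relevances
               rng.normal(3.0, 0.1, (10, 4))])  # class 1 relevances
y = np.array([0] * 10 + [1] * 10)

centroids = class_relevance_centroids(R, y)
id_score = nero_score(rng.normal(0.0, 0.1, 4), centroids)   # near class 0
ood_score = nero_score(rng.normal(10.0, 0.1, 4), centroids) # far from both
```

In this toy setting the OOD sample's score is much larger than the ID sample's, which is the separation the relevance-distance metric is designed to produce.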
URL
https://arxiv.org/abs/2506.15404