Abstract
Universal anomaly detection remains a challenging problem in machine learning and medical image analysis. It is possible to learn an expected distribution from a single class of normative samples, e.g., through epistemic uncertainty estimates, auto-encoding models, or from synthetic anomalies in a self-supervised way. The performance of self-supervised anomaly detection approaches is still inferior to methods that use examples from known unknown classes to shape the decision boundary. However, outlier exposure methods often do not identify unknown unknowns. Here we discuss an improved self-supervised single-class training strategy that supports the approximation of probabilistic inference with loosened feature locality constraints. We show that up-scaling of gradients with histogram-equalised images is beneficial for recently proposed self-supervision tasks. Our method is integrated into several out-of-distribution (OOD) detection models, and we show evidence that it outperforms the state-of-the-art on various benchmark datasets. Source code will be publicly available by the time of the conference.
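The abstract mentions training on histogram-equalised images for the self-supervision task. As a rough, hedged illustration only (not the authors' implementation), the sketch below shows a plain NumPy histogram-equalisation step that could be applied to normalise intensity distributions of normative training images before a self-supervised anomaly-synthesis pipeline; the function name, parameters, and usage are hypothetical.

```python
import numpy as np

def equalize_histogram(image: np.ndarray, n_bins: int = 256) -> np.ndarray:
    """Map image intensities through their empirical CDF (histogram equalisation).

    Assumes a single-channel image with values in [0, 1]; returns an image
    with an approximately uniform intensity histogram, also in [0, 1].
    """
    flat = image.ravel()
    # Empirical histogram and cumulative distribution of the input intensities.
    hist, bin_edges = np.histogram(flat, bins=n_bins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]  # normalise so the CDF ends at 1
    # Each pixel's equalised intensity is its value under the empirical CDF.
    bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2.0
    equalized = np.interp(flat, bin_centers, cdf)
    return equalized.reshape(image.shape)

# Hypothetical usage: equalise a normative training image before it is fed
# to a self-supervised anomaly-synthesis / OOD-detection model.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.beta(2.0, 5.0, size=(128, 128))  # skewed intensity distribution
    img_eq = equalize_histogram(img)
    print(img.mean(), img_eq.mean())  # equalised mean should be close to 0.5
```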
URL
https://arxiv.org/abs/2303.13227