Abstract
Minimizing the need for pixel-level annotated data when training PET anomaly segmentation networks is crucial, particularly given the time and cost of expert annotations. Current un-/weakly-supervised anomaly detection methods rely on autoencoders or generative adversarial networks trained only on healthy data, though such models can be challenging to train. In this work, we present a weakly supervised and Implicitly guided COuNterfactual diffusion model for Detecting Anomalies in PET images, branded as IgCONDA-PET. Training is conditioned on image class labels (healthy vs. unhealthy), and implicit guidance is used to generate counterfactuals for unhealthy images containing anomalies. The counterfactual generation process synthesizes the healthy counterpart of a given unhealthy image, and the difference between the two facilitates the identification of anomaly locations. The code is available at: this https URL
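The two core ideas in the abstract can be sketched in a few lines. The snippet below is a hypothetical illustration, not the authors' implementation: it shows (1) implicit (classifier-free) guidance, which combines unconditional and class-conditional noise predictions with a guidance scale, and (2) an anomaly map computed as the voxel-wise difference between an unhealthy image and its generated healthy counterfactual. The function names and the guidance-scale parameter `w` are assumptions for illustration.

```python
import numpy as np


def guided_noise(eps_uncond, eps_cond, w):
    """Implicit (classifier-free) guidance: move the noise estimate
    from the unconditional prediction toward the class-conditional one
    by guidance scale w. At w=0 this reduces to the unconditional
    prediction; larger w strengthens the conditioning."""
    return eps_uncond + w * (eps_cond - eps_uncond)


def anomaly_map(unhealthy, healthy_counterfactual):
    """Voxel-wise absolute difference between the unhealthy input and
    its synthesized healthy counterpart; large values indicate
    candidate anomaly locations."""
    return np.abs(unhealthy - healthy_counterfactual)


# Toy example with random tensors standing in for noise predictions.
rng = np.random.default_rng(0)
eps_u = rng.normal(size=(4, 4))
eps_c = rng.normal(size=(4, 4))
guided = guided_noise(eps_u, eps_c, w=3.0)

# Difference map between a fake "unhealthy" slice and its counterfactual.
x = rng.normal(size=(4, 4))
x_cf = x.copy()
x_cf[1, 2] -= 2.0  # counterfactual removes a bright "lesion" voxel
amap = anomaly_map(x, x_cf)
```

In the full model, `eps_uncond` and `eps_cond` would come from a single diffusion network evaluated with and without the class-label condition at each denoising step.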
URL
https://arxiv.org/abs/2405.00239