Abstract
Recently, deep learning has enabled accurate segmentation of various diseases in medical imaging. This performance, however, typically demands large amounts of manual voxel-wise annotations. This tedious process for volumetric data becomes even more complex when not all the required information is available in a single imaging domain, as is the case for PET/CT data. We propose a multimodal interactive segmentation framework that mitigates these issues by combining anatomical and physiological cues from PET/CT data. Our framework utilizes the geodesic distance transform to represent the user annotations, and we implement a novel ellipsoid-based user simulation scheme during training. We further propose two annotation interfaces and conduct a user study to estimate their usability. We evaluated our model on the in-domain validation dataset and an unseen PET/CT dataset. We make our code publicly available: this https URL.
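To make the two mechanisms named in the abstract concrete, below is a minimal, self-contained Python/NumPy sketch, not the authors' released code: an ellipsoid placed around a voxel sampled from the ground-truth mask to simulate a user annotation, and a geodesic distance map computed from that annotation to serve as an extra network input channel. The function names `simulate_ellipsoid_annotation` and `geodesic_distance_3d` are hypothetical, the clipping of the ellipsoid to the object and the Dijkstra-style transform with an intensity-jump cost are plausible assumptions, and the paper's actual implementation may differ.

import heapq
import numpy as np

def simulate_ellipsoid_annotation(gt_mask, rng, max_radius=8):
    """Place a random ellipsoid around a voxel sampled from the ground truth
    (assumption: the simulated annotation is clipped to the object mask)."""
    zs, ys, xs = np.nonzero(gt_mask)
    i = rng.integers(len(zs))
    center = np.array([zs[i], ys[i], xs[i]], dtype=float)
    radii = rng.uniform(1.0, max_radius, size=3)          # per-axis semi-axes
    grid = np.indices(gt_mask.shape, dtype=float)
    # Normalized squared distance to the ellipsoid center.
    d = sum(((grid[a] - center[a]) / radii[a]) ** 2 for a in range(3))
    return (d <= 1.0) & gt_mask.astype(bool)

def geodesic_distance_3d(image, seeds, lam=1.0):
    """Dijkstra geodesic distance: each step costs a mix of the spatial
    displacement and the intensity jump, weighted by `lam`."""
    dist = np.full(image.shape, np.inf)
    heap = [(0.0, idx) for idx in zip(*np.nonzero(seeds))]
    for _, idx in heap:
        dist[idx] = 0.0
    heapq.heapify(heap)
    offsets = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1)
               for dx in (-1, 0, 1) if (dz, dy, dx) != (0, 0, 0)]
    while heap:
        d, (z, y, x) = heapq.heappop(heap)
        if d > dist[z, y, x]:
            continue                                      # stale queue entry
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if 0 <= nz < image.shape[0] and 0 <= ny < image.shape[1] \
                    and 0 <= nx < image.shape[2]:
                step = np.sqrt(dz * dz + dy * dy + dx * dx)
                grad = abs(float(image[nz, ny, nx]) - float(image[z, y, x]))
                nd = d + np.sqrt(step ** 2 + (lam * grad) ** 2)
                if nd < dist[nz, ny, nx]:
                    dist[nz, ny, nx] = nd
                    heapq.heappush(heap, (nd, (nz, ny, nx)))
    return dist

# Toy usage: one simulated foreground annotation on a random PET-like volume.
rng = np.random.default_rng(0)
vol = rng.random((24, 24, 24)).astype(np.float32)
gt = np.zeros_like(vol, dtype=np.uint8)
gt[8:16, 8:16, 8:16] = 1
ann = simulate_ellipsoid_annotation(gt, rng)
gmap = geodesic_distance_3d(vol, ann)                     # extra input channel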
URL
https://arxiv.org/abs/2301.09914