Abstract
Radiologists have long sought fully automated, versatile AI for medical imaging interpretation. However, the lack of extensively annotated, large-scale, multi-disease datasets has hindered this goal. In this paper, we explore the feasibility of leveraging language as a natural source of high-quality supervision for chest CT imaging. Given the limited availability of image-report pairs, we bootstrap the understanding of 3D chest CT images by distilling chest-related diagnostic knowledge from an extensively pre-trained 2D X-ray expert model. Specifically, we propose a language-guided retrieval method to match each 3D CT image with its semantically closest 2D X-ray image, and perform pair-wise and semantic-relation knowledge distillation. Subsequently, we use contrastive learning to align images and reports within the same patient while distinguishing them from those of other patients. A challenge arises, however, when patients share similar semantic diagnoses (e.g., healthy patients): treating such pairs as negatives introduces false negatives that can confuse the model. We introduce a robust contrastive learning method that identifies and corrects these false negatives. We train our model on over 12,000 pairs of chest CT images and radiology reports. Extensive experiments across multiple scenarios, including zero-shot learning, report generation, and fine-tuning, demonstrate the model's feasibility in interpreting chest CT images.
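The false-negative correction described above can be sketched as a symmetric InfoNCE-style loss in which off-diagonal pairs whose report embeddings are nearly identical are masked out of the softmax denominator. This is a minimal NumPy illustration under assumed details (the similarity threshold, the use of report-report similarity to flag false negatives, and the masking strategy are our own simplifications, not the paper's exact formulation):

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Scale each row to unit length for cosine similarity."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def robust_contrastive_loss(img_emb, txt_emb, temperature=0.07, fn_threshold=0.9):
    """Symmetric contrastive (InfoNCE-style) loss with false-negative masking.

    Off-diagonal pairs whose *report* embeddings are nearly identical
    (cosine similarity above fn_threshold) are treated as likely false
    negatives (e.g., two healthy patients) and removed from the
    softmax denominator instead of being pushed apart.
    """
    img = l2_normalize(img_emb)
    txt = l2_normalize(txt_emb)
    logits = img @ txt.T / temperature          # (N, N) image-to-report logits
    n = logits.shape[0]

    # Report-report similarity flags semantically duplicate diagnoses.
    txt_sim = txt @ txt.T
    fn_mask = (txt_sim > fn_threshold) & ~np.eye(n, dtype=bool)

    def masked_xent(lg, mask):
        # Setting masked logits to -inf zeroes them in the denominator.
        lg = np.where(mask, -np.inf, lg)
        lg = lg - lg.max(axis=1, keepdims=True)
        log_prob = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -np.diag(log_prob).mean()

    # Average the image-to-report and report-to-image directions.
    return 0.5 * (masked_xent(logits, fn_mask) + masked_xent(logits.T, fn_mask.T))
```

A quick usage example: with two patients sharing an identical report, masking removes the spurious negative term, so the loss is lower than with masking disabled (threshold above 1 never triggers):

```python
rng = np.random.default_rng(0)
img = rng.normal(size=(4, 8))
txt = rng.normal(size=(4, 8))
txt[1] = txt[0]  # two patients with semantically identical reports
loss_robust = robust_contrastive_loss(img, txt, fn_threshold=0.9)
loss_plain = robust_contrastive_loss(img, txt, fn_threshold=2.0)  # no masking
```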
URL
https://arxiv.org/abs/2404.04936