Abstract
In radiologists' routine work, one major task is to read a medical image, e.g., a CT scan, find significant lesions, and describe them in the radiology report. In this paper, we study the lesion description or annotation problem. Given a lesion image, our aim is to predict a comprehensive set of relevant labels, such as the lesion's body part, type, and attributes, which may assist downstream fine-grained diagnosis. To address this task, we first design a deep learning module to extract relevant semantic labels from the radiology reports associated with the lesion images. With the images and text-mined labels, we propose a lesion annotation network (LesaNet) based on a multilabel convolutional neural network (CNN) that learns all labels holistically. Hierarchical relations and mutually exclusive relations between the labels are leveraged to improve label prediction accuracy. The relations are utilized in a label expansion strategy and a relational hard example mining algorithm. We also attach a simple score propagation layer to LesaNet to enhance recall and exploit implicit relations between labels. Multilabel metric learning is combined with classification to enable interpretable prediction. We evaluated LesaNet on the public DeepLesion dataset, which contains over 32K diverse lesion images. Experiments show that LesaNet can precisely annotate the lesions using an ontology of 171 fine-grained labels with an average AUC of 0.9344.
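The label expansion strategy mentioned above can be illustrated with a minimal sketch: a text-mined positive label implies all of its ancestors in the ontology are positive, and labels mutually exclusive with a positive become reliable negatives. The toy ontology and helper below are hypothetical illustrations, not the paper's actual DeepLesion ontology or code.

```python
# Hedged sketch of hierarchical label expansion with exclusive relations.
# PARENT and EXCLUSIVE are toy examples, not the paper's 171-label ontology.

PARENT = {  # child -> parent (hierarchical relations)
    "lung nodule": "lung lesion",
    "lung lesion": "chest lesion",
}
EXCLUSIVE = {  # mutually exclusive label pairs
    "chest lesion": {"abdomen lesion"},
    "abdomen lesion": {"chest lesion"},
}

def expand_labels(positives):
    """Expand text-mined positive labels along the hierarchy and
    derive reliable negatives from mutually exclusive relations."""
    expanded = set(positives)
    for label in positives:
        # walk up the hierarchy: a child label implies its ancestors
        cur = label
        while cur in PARENT:
            cur = PARENT[cur]
            expanded.add(cur)
    negatives = set()
    for label in expanded:
        negatives |= EXCLUSIVE.get(label, set())
    return expanded, negatives - expanded

pos, neg = expand_labels({"lung nodule"})
# pos gains "lung lesion" and "chest lesion"; neg contains "abdomen lesion"
```

Expanded positives and derived negatives of this kind can then serve as cleaner supervision for the multilabel CNN, and the exclusive pairs as candidates for relational hard example mining.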
URL
https://arxiv.org/abs/1904.04661