Abstract
Detection of malignant lesions in mammography images is extremely important for early breast cancer diagnosis. In clinical practice, images are acquired from two different angles, and radiologists can fully utilize information from both views, simultaneously locating the same lesion. For automatic detection approaches, however, such information fusion remains a challenge. In this paper, we propose a new model, MAMM-Net, which processes both mammography views simultaneously by sharing information not only at the object level, as in existing works, but also at the feature level. MAMM-Net's key component is the Fusion Layer, which is based on deformable attention and designed to increase detection precision while keeping recall high. Our experiments show superior performance on the public DDSM dataset compared to the previous state-of-the-art model, while introducing new helpful features such as pixel-level lesion annotation and classification of lesion malignancy.
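The core idea behind deformable attention, as used by the Fusion Layer described above, is that each query attends to a small sampled set of positions in the other view rather than to every position, which keeps cross-view fusion cheap. The following is a minimal NumPy sketch of that idea under simplifying assumptions: the function name `cross_view_fusion` is hypothetical, sampling locations are drawn at random for illustration (the actual model predicts them from query features), and the real Fusion Layer architecture is not specified in the abstract.

```python
import numpy as np


def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def cross_view_fusion(feat_a, feat_b, n_samples=4, seed=0):
    """Fuse tokens of view A with sampled tokens of view B.

    feat_a, feat_b: (N, D) feature tokens from the two mammography views.
    Instead of full cross-attention over all of view B, each token of
    view A attends only to `n_samples` sampled tokens of view B -- the
    sparsity idea of deformable attention. Sampling is random here for
    illustration only; a real layer would predict the offsets.
    """
    rng = np.random.default_rng(seed)
    n_a, d = feat_a.shape
    n_b = feat_b.shape[0]
    fused = np.empty_like(feat_a)
    for i in range(n_a):
        idx = rng.choice(n_b, size=min(n_samples, n_b), replace=False)
        keys = feat_b[idx]                               # (n_samples, D)
        attn = softmax(keys @ feat_a[i] / np.sqrt(d))    # (n_samples,)
        fused[i] = feat_a[i] + attn @ keys               # residual fusion
    return fused


# Toy example: 16 tokens of dimension 8 per view.
a = np.random.default_rng(1).normal(size=(16, 8))
b = np.random.default_rng(2).normal(size=(16, 8))
out = cross_view_fusion(a, b)
print(out.shape)  # (16, 8)
```

Because only `n_samples` keys are gathered per query, the cost scales with the number of sampled points rather than the full resolution of the second view, which is what makes this style of attention practical on large mammography feature maps.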
URL
https://arxiv.org/abs/2404.16718