Abstract
This paper introduces MAD-MIL, a Multi-head Attention-based Deep Multiple Instance Learning model designed for weakly supervised Whole Slide Image (WSI) classification in digital pathology. Inspired by the multi-head attention mechanism of the Transformer, MAD-MIL reduces model complexity while achieving competitive results against advanced models like CLAM and DS-MIL. Evaluated on MNIST-BAGS and public datasets, including TUPAC16, TCGA BRCA, TCGA LUNG, and TCGA KIDNEY, MAD-MIL consistently outperforms ABMIL. This demonstrates enhanced information diversity, interpretability, and efficiency in slide representation. The model's effectiveness, coupled with fewer trainable parameters and lower computational complexity, makes it a promising solution for automated pathology workflows. Our code is available at this https URL.
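The core idea described above, attention-based MIL pooling split across several independent heads, can be sketched as follows. This is a minimal NumPy illustration of multi-head attention pooling over a bag of patch embeddings, not the authors' actual MAD-MIL implementation; the dimensions, weight shapes, and the tanh attention network are assumptions in the spirit of ABMIL.

```python
import numpy as np

def softmax(z, axis=0):
    # Numerically stable softmax along the instance axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multihead_attention_mil_pool(bag, W1, W2, n_heads=4):
    """Pool a bag of instance features into one slide embedding.

    bag:   (n_instances, d) patch feature matrix for one slide.
    W1[h]: (d // n_heads, attn_dim) first attention layer of head h.
    W2[h]: (attn_dim, 1) scoring layer of head h.
    Each head attends over its own slice of the feature dimension
    (hypothetical design choice), and head outputs are concatenated.
    """
    chunks = np.split(bag, n_heads, axis=1)        # one feature slice per head
    pooled = []
    for h, feats in enumerate(chunks):
        scores = np.tanh(feats @ W1[h]) @ W2[h]    # (n_instances, 1)
        attn = softmax(scores, axis=0)             # weights sum to 1 over instances
        pooled.append((attn * feats).sum(axis=0))  # attention-weighted average
    return np.concatenate(pooled)                  # (d,) slide representation

# Toy usage: 100 patch embeddings of dimension 512, 4 heads.
rng = np.random.default_rng(0)
d, attn_dim, n_heads = 512, 128, 4
bag = rng.normal(size=(100, d))
W1 = [rng.normal(size=(d // n_heads, attn_dim)) for _ in range(n_heads)]
W2 = [rng.normal(size=(attn_dim, 1)) for _ in range(n_heads)]
slide = multihead_attention_mil_pool(bag, W1, W2, n_heads)
print(slide.shape)  # (512,)
```

Because each head only parameterizes an attention network over a slice of the features, the pooled representation stays the same size as a single-head ABMIL embedding while the heads can attend to different instances, which is one way the abstract's "information diversity" with fewer parameters can be realized.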
URL
https://arxiv.org/abs/2404.05362