Abstract
Vision Transformers are at the heart of the current surge of interest in foundation models for histopathology. They process images by splitting them into smaller patches along a regular grid, regardless of content. Yet not all parts of an image are equally relevant to its understanding. This is particularly true in computational pathology, where background is entirely non-informative and may introduce artefacts that mislead predictions. To address this issue, we propose a novel method that explicitly masks background in the Vision Transformer's attention mechanism. This ensures that tokens corresponding to background patches do not contribute to the final image representation, thereby improving model robustness and interpretability. We validate our approach on prostate cancer grading from whole-slide images as a case study. Our results demonstrate that it matches the performance of plain self-attention while providing more accurate and clinically meaningful attention heatmaps.
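The mechanism described above can be illustrated with a short sketch. Below is a minimal single-head PyTorch implementation of background-masked self-attention, assuming a boolean mask that flags background patches (True = background); the class name, tensor shapes, and masking convention are illustrative assumptions, not the paper's actual code.

```python
# A minimal sketch of background-masked self-attention, assuming a standard
# single-head ViT-style attention block. Names (x, bg_mask) and the masking
# convention are assumptions for illustration, not the paper's API.
import torch
import torch.nn as nn


class BackgroundMaskedAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor, bg_mask: torch.Tensor) -> torch.Tensor:
        # x: (B, N, D) token embeddings, CLS token at index 0.
        # bg_mask: (B, N) boolean, True where a token is a background patch;
        # the CLS token (and at least one tissue token) must be unmasked so
        # the softmax stays well-defined.
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        scores = (q @ k.transpose(-2, -1)) * self.scale  # (B, N, N)
        # Mask background *keys*: no token may attend to a background patch,
        # so background never leaks into the final (e.g. CLS) representation.
        scores = scores.masked_fill(bg_mask[:, None, :], float("-inf"))
        attn = scores.softmax(dim=-1)
        return self.proj(attn @ v)
```

Masking the keys' attention scores (rather than, say, zeroing background embeddings) makes the softmax renormalize over tissue patches only, so attention weights, and any heatmaps derived from them, are distributed entirely over informative regions.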
URL
https://arxiv.org/abs/2404.18152