Abstract
Representation learning from gigapixel Whole Slide Images (WSI) poses a significant challenge in computational pathology due to the complicated nature of tissue structures and the scarcity of labeled data. Multiple Instance Learning (MIL) methods address this challenge by classifying slides from image patches, using feature encoders pretrained with Self-Supervised Learning (SSL). The performance of both SSL and MIL methods depends on the architecture of the feature encoder. This paper proposes leveraging the Vision Mamba (Vim) architecture, inspired by state space models, within the DINO framework for representation learning in computational pathology. We evaluate the performance of Vim against Vision Transformers (ViT) on the Camelyon16 dataset for both patch-level and slide-level classification. Our findings highlight Vim's enhanced performance compared to ViT, particularly at smaller scales, where Vim achieves an 8.21-point increase in ROC AUC for models of similar size. An explainability analysis further highlights Vim's capabilities, revealing that Vim, unlike ViT, uniquely emulates the pathologist's workflow. This alignment with human expert analysis highlights Vim's potential in practical diagnostic settings and contributes significantly to developing effective representation-learning algorithms in computational pathology. We release the codes and pretrained weights at \url{this https URL}.
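As context for the DINO framework mentioned above: DINO trains a student network with gradients while updating a teacher network as an exponential moving average (EMA) of the student's weights. A minimal, framework-agnostic sketch of that EMA update (a simplification for illustration; the function name and plain-list parameters are assumptions, not the paper's code):

```python
def ema_update(teacher_params, student_params, momentum=0.996):
    """DINO-style teacher update: each teacher parameter moves toward the
    corresponding student parameter by a factor of (1 - momentum).
    Parameters are represented here as flat lists of floats for simplicity."""
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher_params, student_params)]

# Example: with momentum 0.9, the teacher keeps 90% of its value
# and takes 10% from the student.
updated = ema_update([1.0, 2.0], [0.0, 0.0], momentum=0.9)
```

In the actual DINO recipe, the momentum is typically scheduled from ~0.996 toward 1.0 over training; this sketch uses a fixed value for clarity.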
URL
https://arxiv.org/abs/2404.13222