Abstract
Semi-supervised medical image segmentation is a crucial technique for alleviating the high cost of data annotation. When labeled data is limited, textual information can provide additional context to enhance visual semantic understanding. However, research exploring the use of textual data to enhance visual semantic embeddings in 3D medical imaging tasks remains scarce. In this paper, we propose a novel text-driven multiplanar visual interaction framework for semi-supervised medical image segmentation (termed Text-SemiSeg), which consists of three main modules: Text-enhanced Multiplanar Representation (TMR), Category-aware Semantic Alignment (CSA), and Dynamic Cognitive Augmentation (DCA). Specifically, TMR facilitates text-visual interaction through planar mapping, thereby enhancing the category awareness of visual features. CSA performs cross-modal semantic alignment between text features, augmented with learnable variables, and intermediate-layer visual features. DCA reduces the distribution discrepancy between labeled and unlabeled data through their interaction, thus improving the model's robustness. Finally, experiments on three public datasets demonstrate that our model effectively enhances visual features with textual information and outperforms other methods. Our code is available at this https URL.
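The abstract only names the mechanism behind TMR; as a concrete illustration, below is a minimal PyTorch-style sketch of one plausible reading of the multiplanar text-visual interaction: a 3D feature volume is projected onto its three anatomical planes, each planar view cross-attends to class-level text embeddings (e.g., produced by a frozen text encoder over class-name prompts), and the enhanced planes are broadcast back and fused residually. All module names, tensor shapes, and the attention design here are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of the text-driven multiplanar interaction idea (TMR).
# The class names, shapes, and cross-attention layout are assumptions;
# the paper's architecture may differ.
import torch
import torch.nn as nn


class PlanarTextInteraction(nn.Module):
    """Cross-attends one planar view of a 3D feature volume to text embeddings."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, plane: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
        # plane: (B, N, C) flattened planar features; text: (B, K, C) class embeddings
        out, _ = self.attn(query=plane, key=text, value=text)
        return self.norm(plane + out)


class TextMultiplanar(nn.Module):
    """Projects a volume onto axial/coronal/sagittal planes, injects text
    semantics into each plane, and fuses the enhanced planes back in 3D."""

    def __init__(self, dim: int):
        super().__init__()
        self.planes = nn.ModuleList([PlanarTextInteraction(dim) for _ in range(3)])

    def forward(self, feat: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, D, H, W) volumetric features
        B, C, D, H, W = feat.shape
        enhanced = []
        for axis, block in zip((2, 3, 4), self.planes):
            plane = feat.mean(dim=axis)                    # planar projection, e.g. (B, C, H, W)
            tokens = plane.flatten(2).transpose(1, 2)      # (B, h*w, C) token sequence
            tokens = block(tokens, text)                   # text-visual cross-attention
            plane = tokens.transpose(1, 2).reshape(B, C, *plane.shape[2:])
            enhanced.append(plane.unsqueeze(axis).expand_as(feat))  # broadcast back to 3D
        return feat + sum(enhanced) / 3.0                  # residual fusion of the three planes
```

Under these assumptions, `text` would hold one embedding per segmentation category, so each plane's attention map can strengthen the category awareness of the visual features that the abstract attributes to TMR.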
URL
https://arxiv.org/abs/2507.12382