Abstract
While CNN-based methods have been the cornerstone of medical image segmentation thanks to their promising performance and robustness, they are limited in capturing long-range dependencies. Transformer-based approaches are currently prevailing since they enlarge the receptive field to model global contextual correlations. To extract richer representations, some extensions of the U-Net employ multi-scale feature extraction and fusion modules and achieve improved performance. Inspired by this idea, we propose TransCeption for medical image segmentation, a pure transformer-based U-shaped network that incorporates an inception-like module into the encoder and adopts a contextual bridge for better feature fusion. The proposed design rests on three core principles: (1) The patch merging module in the encoder is redesigned as ResInception Patch Merging (RIPM), and a multi-branch transformer (MB transformer) adopts the same number of branches as RIPM produces outputs; combining the two modules enables the model to capture a multi-scale representation within a single stage. (2) An Intra-stage Feature Fusion (IFF) module follows the MB transformer to strengthen the aggregation of feature maps from all branches, focusing in particular on the interaction between the channels of all scales. (3) In contrast to a bridge that contains only token-wise self-attention, we propose a Dual Transformer Bridge that also includes channel-wise self-attention to exploit correlations between scales at different stages from a dual perspective. Extensive experiments on multi-organ and skin lesion segmentation tasks demonstrate the superior performance of TransCeption compared to previous work. The code is publicly available at \url{this https URL}.
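The contrast the abstract draws between token-wise and channel-wise self-attention can be made concrete with a minimal single-head sketch. This is an illustrative simplification (plain NumPy, no learned Q/K/V projections or multi-head structure, which the actual architecture presumably uses): token-wise attention forms an N×N map over spatial tokens, while channel-wise attention forms a C×C map over feature channels.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def token_wise_attention(x):
    # x: (N, C) — N spatial tokens with C channels each.
    # Attention map is (N, N): every token attends to every other token.
    scores = x @ x.T / np.sqrt(x.shape[1])
    return softmax(scores, axis=-1) @ x

def channel_wise_attention(x):
    # Attention map is (C, C): every channel attends to every other channel,
    # capturing cross-channel (and hence cross-scale) correlations.
    scores = x.T @ x / np.sqrt(x.shape[0])
    return (softmax(scores, axis=-1) @ x.T).T

x = np.random.randn(16, 8)  # hypothetical input: 16 tokens, 8 channels
print(token_wise_attention(x).shape)    # (16, 8)
print(channel_wise_attention(x).shape)  # (16, 8)
```

Both variants preserve the input shape; the dual bridge's point is that the two attention maps scale differently (quadratic in tokens vs quadratic in channels) and model complementary dependencies.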
URL
https://arxiv.org/abs/2301.10847