Abstract
We propose Masked-Attention Transformers for Surgical Instrument Segmentation (MATIS), a two-stage, fully transformer-based method that leverages modern pixel-wise attention mechanisms for instrument segmentation. MATIS exploits the instance-level nature of the task by employing a masked attention module that generates and classifies a set of fine instrument region proposals. Our method incorporates long-term video-level information through video transformers to improve temporal consistency and enhance mask classification. We validate our approach on two standard public benchmarks, Endovis 2017 and Endovis 2018. Our experiments demonstrate that MATIS's per-frame baseline outperforms previous state-of-the-art methods and that adding our temporal consistency module boosts performance further.
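The abstract does not spell out how masked attention restricts each query to its region proposal. A minimal NumPy sketch of the general masked-attention idea (in the style popularized by Mask2Former, not MATIS's actual implementation; all function and variable names here are illustrative assumptions) is:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def masked_attention(queries, keys, values, mask):
    """Cross-attention where each query attends only to the pixels
    inside its predicted mask region.

    queries: (Q, d)  one query per region proposal
    keys:    (P, d)  pixel features
    values:  (P, d)  pixel features
    mask:    (Q, P)  boolean; True = pixel belongs to the proposal
    """
    d = queries.shape[-1]
    logits = queries @ keys.T / np.sqrt(d)   # (Q, P) attention logits
    logits = np.where(mask, logits, -1e9)    # suppress out-of-mask pixels
    weights = softmax(logits, axis=-1)       # rows sum to 1, zero off-mask
    return weights @ values                  # (Q, d) region-level features

# Toy example: 2 instrument proposals over 4 pixel features.
rng = np.random.default_rng(0)
q = rng.standard_normal((2, 8))
k = rng.standard_normal((4, 8))
v = rng.standard_normal((4, 8))
m = np.array([[True, True, False, False],
              [False, False, True, True]])
out = masked_attention(q, k, v, m)
print(out.shape)  # (2, 8)
```

Each proposal's output feature is thus a convex combination of only its own pixels, which is what lets the second stage classify each fine region proposal independently.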
URL
https://arxiv.org/abs/2303.09514