Abstract
Surgical action localization is a challenging computer vision problem. While it has promising applications, including automated surgical training and surgical workflow optimization, appropriate model design is pivotal to accomplishing this task. Moreover, the scarcity of suitable medical datasets adds a further layer of complexity. To that end, we introduce UroSlice, a new and complex dataset of nephrectomy surgeries. To perform action localization on these videos, we propose a novel model termed 'ViTALS' (Vision Transformer for Action Localization in Surgical Nephrectomy). Our model incorporates hierarchical dilated temporal convolution layers and inter-layer residual connections to capture temporal correlations at both finer and coarser granularities. The proposed approach achieves state-of-the-art performance on the Cholec80 and UroSlice datasets (89.8% and 66.1% accuracy, respectively), validating its effectiveness.
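
The abstract does not spell out the architecture, but "hierarchical dilated temporal convolution layers with inter-layer residual connections" is a well-known pattern (as in MS-TCN-style temporal models). Below is a minimal PyTorch sketch of that pattern, not the paper's actual implementation; all class names, layer counts, and dimensions (DilatedResidualLayer, HierarchicalTemporalStack, the 2048-dim features) are illustrative assumptions.

    import torch
    import torch.nn as nn

    class DilatedResidualLayer(nn.Module):
        """One temporal block: dilated 1-D conv + pointwise conv + residual.
        (Hypothetical sketch; not taken from the paper.)"""
        def __init__(self, channels: int, dilation: int):
            super().__init__()
            # padding == dilation keeps the temporal length unchanged for kernel size 3
            self.dilated_conv = nn.Conv1d(channels, channels, kernel_size=3,
                                          padding=dilation, dilation=dilation)
            self.pointwise = nn.Conv1d(channels, channels, kernel_size=1)
            self.relu = nn.ReLU()

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            out = self.relu(self.dilated_conv(x))
            out = self.pointwise(out)
            return x + out  # inter-layer residual connection

    class HierarchicalTemporalStack(nn.Module):
        """Stack whose dilation doubles each level, so the receptive field
        grows from fine (local) to coarse (long-range) granularity."""
        def __init__(self, in_dim: int, channels: int, num_classes: int,
                     num_layers: int = 10):
            super().__init__()
            self.proj = nn.Conv1d(in_dim, channels, kernel_size=1)
            self.layers = nn.ModuleList(
                [DilatedResidualLayer(channels, dilation=2 ** i)
                 for i in range(num_layers)]
            )
            self.classifier = nn.Conv1d(channels, num_classes, kernel_size=1)

        def forward(self, feats: torch.Tensor) -> torch.Tensor:
            # feats: (batch, in_dim, num_frames) frame-level features
            x = self.proj(feats)
            for layer in self.layers:
                x = layer(x)
            return self.classifier(x)  # per-frame action logits

    # Example: 2048-dim frame features for a 1200-frame clip, 10 action phases
    logits = HierarchicalTemporalStack(2048, 64, 10)(torch.randn(1, 2048, 1200))

Doubling the dilation at each level grows the receptive field exponentially with depth, which lets early layers capture fine-grained transitions while deeper layers model coarse, long-range phase structure; the residual connections keep gradients stable across the stack.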
URL
https://arxiv.org/abs/2405.02571