Abstract
Purpose: Thorough training of novice surgeons is crucial to ensure that surgical interventions are effective and safe. One important aspect is teaching the technical skills required for minimally invasive or robot-assisted procedures, which includes objective and, ideally, automatic assessment of surgical skill. Recent studies have reported good results for automatic, objective skill evaluation based on collecting and analyzing motion data, such as trajectories of surgical instruments. However, obtaining such motion data generally requires additional equipment for instrument tracking or access to a robotic surgery system that captures kinematic data. In contrast, we investigate a method for automatic, objective skill assessment that requires only video data. This has the advantage that video can be collected effortlessly during minimally invasive and robot-assisted training scenarios.

Methods: Our method builds on recent advances in deep learning-based video classification. Specifically, we propose to use an Inflated 3D ConvNet to classify snippets of optical flow extracted from surgical video. During training, the network is extended into a Temporal Segment Network.

Results: On the publicly available JIGSAWS dataset, our approach achieves high skill classification accuracies ranging from 95.1% to 100.0%.

Conclusions: Our results demonstrate the feasibility of deep learning-based assessment of technical skill from surgical video. The 3D ConvNet learns meaningful patterns directly from the data, alleviating the need for manual feature engineering. Further evaluation will require more annotated data for training and testing.
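The abstract describes classifying optical-flow snippets with a 3D ConvNet and aggregating them in the Temporal Segment Network (TSN) style. The sketch below illustrates only the TSN consensus step under simplifying assumptions: the 3D ConvNet itself is abstracted away, and the per-snippet class scores (`snippet_logits`) are hypothetical placeholder values, not outputs of the authors' model. Class count and snippet count are likewise illustrative.

```python
import numpy as np

def tsn_consensus(snippet_logits: np.ndarray) -> np.ndarray:
    """TSN-style segmental consensus: average the per-snippet class
    scores to obtain a single video-level score vector."""
    return np.mean(snippet_logits, axis=0)

def softmax(x: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Hypothetical logits from a 3D ConvNet applied to K=3 optical-flow
# snippets sampled from one video; columns are three skill classes
# (e.g. novice / intermediate / expert in JIGSAWS).
snippet_logits = np.array([
    [2.0, 0.5, -1.0],
    [1.5, 0.7, -0.5],
    [2.5, 0.3, -1.2],
])

video_scores = softmax(tsn_consensus(snippet_logits))
predicted_class = int(np.argmax(video_scores))
```

The consensus function averages logits before the softmax, so each sampled snippet contributes equally to the video-level prediction; this is the standard TSN aggregation choice, not a detail stated in the abstract.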
URL
https://arxiv.org/abs/1903.02306