Abstract
In this paper, we address the open research problem of surgical gesture recognition using motion cues from video data only. We adapt the optical flow ConvNets originally proposed by Simonyan et al. While Simonyan et al. use both RGB frames and dense optical flow, we use only dense optical flow representations as input, to emphasize the role of motion in surgical gesture recognition and to present it as a robust alternative to kinematic data. We also overcome one of the limitations of optical flow ConvNets by initializing our model with cross-modality pre-training. Many promising studies on surgical gesture recognition rely heavily on kinematic data, which requires additional recording devices. To our knowledge, this is the first paper to address surgical gesture recognition using dense optical flow information only. We achieve competitive results on the JIGSAWS dataset; moreover, our model produces more robust results with a lower standard deviation, which suggests that optical flow information can serve as an alternative to kinematic data for the recognition of surgical gestures.
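The two ideas named above, stacking dense optical flow fields into the network input and cross-modality pre-training of the first convolutional layer, can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' code: the shapes, function names, and the mean-kernel replication scheme are assumptions based on the standard two-stream formulation.

```python
import numpy as np

def stack_flows(flows):
    # flows: list of L arrays of shape (2, H, W), i.e. the horizontal and
    # vertical flow components for L consecutive frame pairs.
    # Returns a single (2L, H, W) volume, the input to the flow ConvNet.
    return np.concatenate(flows, axis=0)

def cross_modality_init(rgb_weights, in_channels):
    # Cross-modality pre-training sketch (an assumption, not the paper's
    # exact procedure): average RGB-pretrained first-layer kernels of shape
    # (out, 3, k, k) over the 3 color channels, then replicate the mean
    # kernel across all 2L flow channels so ImageNet pre-training transfers.
    mean_kernel = rgb_weights.mean(axis=1, keepdims=True)  # (out, 1, k, k)
    return np.repeat(mean_kernel, in_channels, axis=1)     # (out, 2L, k, k)

# Illustrative shapes only.
L, H, W = 10, 224, 224
flows = [np.random.randn(2, H, W).astype(np.float32) for _ in range(L)]
x = stack_flows(flows)                                       # (20, 224, 224)
rgb_w = np.random.randn(64, 3, 7, 7).astype(np.float32)
w = cross_modality_init(rgb_w, 2 * L)                        # (64, 20, 7, 7)
print(x.shape, w.shape)
```

With this initialization, the flow network starts from weights whose scale matches the RGB-pretrained model, which is what makes pre-training usable despite the input modality change.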
URL
https://arxiv.org/abs/1904.01143