Abstract
The absence of openly accessible data and specialized foundation models is a major barrier to computational research in surgery. To address this, (i) we open-source the largest dataset of general surgery videos to date, consisting of 680 hours of surgical videos, including data from robotic and laparoscopic techniques across 28 procedures; (ii) we propose a technique for video pre-training a general surgery vision transformer (GSViT) on surgical videos via forward video prediction that can run in real time for surgical applications, and we open-source the code and weights of GSViT; (iii) we also release code and weights for procedure-specific fine-tuned versions of GSViT across 10 procedures; (iv) we demonstrate the performance of GSViT on the Cholec80 phase annotation task, where it outperforms state-of-the-art single-frame predictors.
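The forward video prediction objective mentioned in (ii) can be sketched generically: a model is trained to predict frame t+1 from frame t, and the reconstruction error drives pre-training. The sketch below is a minimal, illustrative toy (a linear predictor on synthetic frames with an MSE loss); it is not the GSViT architecture or training code, and all names in it are assumptions.

```python
import numpy as np

# Minimal sketch of a forward video prediction objective (illustrative only;
# NOT the actual GSViT model or training pipeline).

rng = np.random.default_rng(0)

# Synthetic "video": T frames, each flattened to D pixels.
T, D = 8, 16
frames = rng.standard_normal((T, D))

# Toy linear predictor: frame_{t+1} is approximated by frames[t] @ W.
W = rng.standard_normal((D, D)) * 0.01

def forward_prediction_loss(W, frames):
    """Mean squared error between predicted and actual next frames."""
    preds = frames[:-1] @ W      # predict frame t+1 from frame t
    targets = frames[1:]
    return np.mean((preds - targets) ** 2)

def grad_W(W, frames):
    """Gradient of the MSE objective with respect to W."""
    X, Y = frames[:-1], frames[1:]
    return 2.0 * X.T @ (X @ W - Y) / (X.shape[0] * X.shape[1])

# One gradient-descent step should reduce the prediction loss.
loss_before = forward_prediction_loss(W, frames)
W = W - 0.1 * grad_W(W, frames)
loss_after = forward_prediction_loss(W, frames)
```

In an actual video transformer, the linear map would be replaced by the network and the pixel-space MSE could be swapped for a latent-space objective; the self-supervised structure (predict the next frame, minimize reconstruction error) is the same.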
URL
https://arxiv.org/abs/2403.05949