Abstract
Physics-based simulations have accelerated progress in robot learning for driving, manipulation, and locomotion. Yet a fast, accurate, and robust surgical simulation environment remains a challenge. In this paper, we present ORBIT-Surgical, a physics-based surgical robot simulation framework with photorealistic rendering in NVIDIA Omniverse. We provide 14 benchmark surgical tasks for the da Vinci Research Kit (dVRK) and the Smart Tissue Autonomous Robot (STAR), which represent common subtasks in surgical training. ORBIT-Surgical leverages GPU parallelization to train reinforcement learning and imitation learning algorithms, facilitating the study of robot learning to augment human surgical skills. ORBIT-Surgical also enables realistic synthetic data generation for active perception tasks. We demonstrate sim-to-real transfer of policies learned in ORBIT-Surgical onto a physical dVRK robot. Project website: this http URL
URL
https://arxiv.org/abs/2404.16027