Paper Reading AI Learner

Sensory Glove-Based Surgical Robot User Interface

2024-03-20 19:26:27
Leonardo Borgioli, Ki-Hwan Oh, Alberto Mangano, Alvaro Ducas, Luciano Ambrosini, Federico Pinto, Paula A Lopez, Jessica Cassiani, Milos Zefran, Liaohai Chen, Pier Cristoforo Giulianotti


Abstract

Robotic surgery has reached a high level of maturity and has become an integral part of standard surgical care. However, existing surgeon consoles are bulky and take up valuable space in the operating room, present challenges for surgical team coordination, and their proprietary nature makes it difficult to take advantage of recent technological advances, especially in virtual and augmented reality. One potential area for further improvement is the integration of modern sensory gloves into robotic platforms, allowing surgeons to intuitively control robotic arms directly with their hand movements. We propose one such system that combines an HTC Vive tracker, a Manus Meta Prime 3 XR sensory glove, and God Vision wireless smart glasses. The system controls one arm of a da Vinci surgical robot. In addition to moving the arm, the surgeon can use finger movements to control the end-effector of the surgical instrument. Hand gestures are used to implement clutching and similar functions. In particular, we introduce clutching of the instrument orientation, a functionality not available in the da Vinci system. The vibrotactile elements of the glove provide feedback to the user when gesture commands are invoked. A preliminary evaluation shows that the system has excellent tracking accuracy and allows surgeons to efficiently perform common surgical training tasks with minimal practice on the new interface; this suggests that the interface is highly intuitive. The proposed system is inexpensive, allows rapid prototyping, and opens opportunities for further innovations in the design of surgical robot interfaces.
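The orientation clutching described above can be pictured as follows: while the clutch gesture is held, the instrument orientation is frozen so the surgeon can reposition their hand; on release, an offset is re-anchored so the instrument does not jump. The sketch below illustrates this idea for a single yaw angle in radians; the actual system works with full 3-D poses, and all names here are hypothetical, not taken from the paper's implementation.

```python
# Minimal sketch of orientation clutching for a single rotation axis.
# Assumption: hand_angle comes from the tracker each frame, and
# clutch_gesture is True while the surgeon holds the clutch gesture.

class OrientationClutch:
    def __init__(self):
        self.offset = 0.0        # hand-to-instrument orientation offset
        self.engaged = False     # True while the clutch gesture is held
        self._last_hand = 0.0    # last hand angle forwarded to the robot

    def update(self, hand_angle, clutch_gesture):
        """Map the tracked hand angle to an instrument command angle."""
        if clutch_gesture and not self.engaged:
            self.engaged = True              # clutch pressed: freeze output
        elif not clutch_gesture and self.engaged:
            # Clutch released: re-anchor the offset so the instrument
            # command is continuous despite the hand having moved.
            self.offset += self._last_hand - hand_angle
            self.engaged = False
        if not self.engaged:
            self._last_hand = hand_angle     # track the hand normally
        return self._last_hand + self.offset
```

While engaged, the hand can rotate freely without moving the instrument; the offset update on release is what makes the command continuous, mirroring how positional clutching already works on the da Vinci console.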



