Paper Reading AI Learner

Safe Start Regions for Medical Steerable Needle Automation

2024-04-12 15:56:08
Janine Hoelscher, Inbar Fried, Spiros Tsalikis, Jason Akulian, Robert J. Webster III, Ron Alterovitz

Abstract

Steerable needles are minimally invasive devices that enable novel medical procedures by following curved paths to avoid critical anatomical obstacles. Planning algorithms can be used to find a steerable needle motion plan to a target. Deployment typically consists of a physician manually inserting the steerable needle into tissue at the motion plan's start pose and handing off control to a robot, which then autonomously steers it to the target along the plan. The handoff between human and robot is critical for procedure success, as even small deviations from the start pose change the steerable needle's workspace and there is no guarantee that the target will still be reachable. We introduce a metric that evaluates the robustness to such start pose deviations. When measuring this robustness to deviations, we consider the tradeoff between being robust to changes in position versus changes in orientation. We evaluate our metric through simulation in an abstract, a liver, and a lung planning scenario. Our evaluation shows that our metric can be combined with different motion planners and that it efficiently determines large, safe start regions.
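
The abstract describes the metric only at a high level and does not spell out its form. As a rough, hypothetical illustration of the idea (not the authors' actual metric), the sketch below scores a candidate start pose by sampling deviations within a position radius and an orientation radius and checking whether the target remains reachable from each perturbed pose; sweeping the two radii exposes the position-versus-orientation tradeoff mentioned above. The function `is_target_reachable_from` is a placeholder for re-planning or re-validating a plan with a steerable needle planner, and all names, radii, and thresholds here are assumptions for illustration only.

```python
# Minimal sketch (not the paper's implementation): estimate how robust a
# motion plan's start pose is to insertion deviations, trading off tolerance
# to position error against tolerance to orientation error.

import numpy as np


def sample_perturbed_start(start_pos, start_dir, pos_radius, ang_radius, rng):
    """Sample a start pose deviated by at most pos_radius in position
    and ang_radius (radians) in insertion-axis orientation."""
    # Uniform offset inside a ball of radius pos_radius.
    offset = rng.normal(size=3)
    offset *= pos_radius * rng.uniform() ** (1 / 3) / np.linalg.norm(offset)

    # Rotate the insertion direction by a random angle up to ang_radius
    # about a random axis orthogonal to it (degenerate axis has probability 0).
    axis = np.cross(start_dir, rng.normal(size=3))
    axis /= np.linalg.norm(axis)
    angle = ang_radius * rng.uniform()
    new_dir = (np.cos(angle) * start_dir
               + np.sin(angle) * np.cross(axis, start_dir))
    return start_pos + offset, new_dir / np.linalg.norm(new_dir)


def is_target_reachable_from(pos, direction):
    """Hypothetical feasibility check: in practice this would re-plan (or
    re-validate a plan) from the perturbed pose and report success."""
    return np.linalg.norm(pos) < 5.0 and direction[2] > 0.9  # placeholder


def start_region_robustness(start_pos, start_dir, pos_radius, ang_radius,
                            n_samples=200, seed=0):
    """Fraction of sampled start-pose deviations (within the given position
    and orientation radii) from which the target is still reachable."""
    rng = np.random.default_rng(seed)
    successes = 0
    for _ in range(n_samples):
        p, d = sample_perturbed_start(start_pos, start_dir,
                                      pos_radius, ang_radius, rng)
        successes += bool(is_target_reachable_from(p, d))
    return successes / n_samples


if __name__ == "__main__":
    start = np.zeros(3)
    direction = np.array([0.0, 0.0, 1.0])
    # Sweep a few (position, orientation) tolerance pairs to expose the tradeoff.
    for pos_r, ang_r in [(1.0, 0.05), (3.0, 0.05), (1.0, 0.2)]:
        score = start_region_robustness(start, direction, pos_r, ang_r)
        print(f"pos_radius={pos_r}, ang_radius={ang_r} rad -> "
              f"robustness={score:.2f}")
```

A sampling-based score like this is only one plausible reading of "robustness to start pose deviations"; the paper's metric may instead characterize an explicit safe start region rather than a success fraction.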

URL

https://arxiv.org/abs/2404.08558

PDF

https://arxiv.org/pdf/2404.08558.pdf

