Paper Reading AI Learner

Deep Reinforcement Learning for Concentric Tube Robot Path Planning

2023-01-22 17:11:54
Keshav Iyengar, Sarah Spurgeon, Danail Stoyanov

Abstract

As surgical interventions trend towards minimally invasive approaches, Concentric Tube Robots (CTRs) have been explored for interventions such as brain, eye, fetoscopic, lung, cardiac and prostate surgeries. Arranged concentrically, each tube is rotated and translated independently to move the robot end-effector position, making kinematics and control challenging. Classical model-based approaches have been investigated previously, and recent deep learning-based approaches have outperformed them in both forward kinematics and shape estimation. We propose a deep reinforcement learning approach to control that generalises across two- to four-tube systems, something not yet achieved by any other deep learning approach for CTRs, and in this way we explore the likely robustness of the control approach. We also investigate the impact of rotational constraints applied to tube actuation and their effect on error metrics. We evaluate inverse kinematics errors and tracking errors for path-following tasks and compare the results to those achieved with state-of-the-art methods. Additionally, as the current results are obtained in simulation, we investigate a domain transfer approach known as domain randomization and evaluate its error metrics as an initial step towards hardware implementation. Finally, we compare our method to a Jacobian approach found in the literature.
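The abstract describes a goal-conditioned reinforcement learning formulation (state contains the joint configuration and a goal tip position, reward penalises end-effector error) combined with domain randomization, which perturbs the simulated tube parameters so a trained policy does not overfit one parameter set. The paper's actual CTR kinematics and reward shaping are not given here, so the following is a minimal illustrative sketch with a toy planar forward-kinematics stand-in; the class name `ToyCTREnv` and all numeric ranges are assumptions for illustration only.

```python
import math
import random

class ToyCTREnv:
    """Hypothetical goal-conditioned environment sketch:
    state = joints + goal tip position, action = joint deltas,
    reward = negative end-effector error. The real CTR model is
    far more complex; this is a planar toy stand-in."""

    def __init__(self, n_tubes=3, seed=0):
        self.rng = random.Random(seed)
        self.n_tubes = n_tubes
        self.reset()

    def _randomize_tubes(self):
        # Domain randomization: perturb nominal link lengths at every
        # reset so the policy must be robust to parameter variation.
        self.lengths = [1.0 + self.rng.uniform(-0.05, 0.05)
                        for _ in range(self.n_tubes)]

    def _forward_kinematics(self, joints):
        # Toy planar serial chain standing in for CTR forward kinematics.
        x = y = angle = 0.0
        for theta, length in zip(joints, self.lengths):
            angle += theta
            x += length * math.cos(angle)
            y += length * math.sin(angle)
        return x, y

    def reset(self):
        self._randomize_tubes()
        self.joints = [0.0] * self.n_tubes
        # Sample a reachable goal by picking a random joint configuration.
        goal_joints = [self.rng.uniform(-1.0, 1.0)
                       for _ in range(self.n_tubes)]
        self.goal = self._forward_kinematics(goal_joints)
        return self.joints + list(self.goal)

    def step(self, action):
        self.joints = [q + a for q, a in zip(self.joints, action)]
        tip = self._forward_kinematics(self.joints)
        error = math.dist(tip, self.goal)
        reward = -error          # dense reward: negative tip error
        done = error < 1e-2      # success when tip is near the goal
        return self.joints + list(self.goal), reward, done
```

A policy trained over many resets of such an environment sees a different randomized parameter set each episode, which is the domain-transfer idea the abstract evaluates as a step towards hardware.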

URL

https://arxiv.org/abs/2301.09162

PDF

https://arxiv.org/pdf/2301.09162.pdf

