Paper Reading AI Learner

Autonomous Robotic Drilling System for Mice Cranial Window Creation

2024-06-20 09:23:23
Enduo Zhao, Murilo M. Marinho, Kanako Harada

Abstract

Robotic assistance for experimental manipulation in the life sciences is expected to enable favorable outcomes regardless of the skill of the scientist. Experimental specimens in the life sciences are subject to individual variability and hence require intricate algorithms for successful autonomous robotic control. As a use case, we are studying the creation of cranial windows in mice. This operation requires the removal of an 8-mm circular patch of the skull, which is approximately 300 µm thick, but the shape and thickness of the mouse skull vary significantly with the strain, sex, and age of the mouse. In this work, we propose an autonomous robotic drilling method with no offline planning, consisting of a trajectory-planning block with execution-time feedback provided by completion-level recognition based on image and force information. The force information increases the completion-level resolution tenfold. We evaluate the proposed method in two ways: first, in an eggshell drilling task, achieving a success rate of 95% and an average drilling time of 7.1 min over 20 trials; second, in postmortem mice, achieving a success rate of 70% and an average drilling time of 9.3 min over 20 trials.
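
The abstract describes execution-time feedback in which an image-based recognizer coarsely classifies the drilling completion level at each point of the trajectory, and the measured thrust force refines that estimate (the claimed tenfold resolution gain). Below is a minimal Python sketch of such a fusion step; the function names, the force-drop heuristic, and all thresholds are illustrative assumptions for exposition, not the authors' implementation.

```python
import numpy as np

def fuse_completion_level(image_level: int,
                          force_window: np.ndarray,
                          n_image_levels: int = 4,
                          n_force_sublevels: int = 10) -> float:
    """Return a fused completion level in [0, 1] for one drilling point.

    image_level       -- coarse class from an image recognizer, 0..n_image_levels-1
                         (hypothetical; the paper's class count may differ)
    force_window      -- recent thrust-force samples at this point (N >= 1)
    n_force_sublevels -- force subdivides each image class (~10x, per the abstract)
    """
    coarse = image_level / n_image_levels
    f = np.clip(np.asarray(force_window, dtype=float), 0.0, None)
    # Assumed heuristic: as the remaining bone thins, thrust force drops
    # toward zero; map the normalized drop to a sub-level within the
    # current image class.
    drop = 1.0 - f[-1] / (f.max() + 1e-9)
    sub = np.floor(drop * n_force_sublevels) / n_force_sublevels
    return min(coarse + sub / n_image_levels, 1.0)

def update_waypoint_depth(depth_mm: float, level: float,
                          step_mm: float = 0.02, stop_at: float = 0.95) -> float:
    """Lower the drilling waypoint by step_mm until the fused level says
    this point is nearly complete, then hold (execution-time feedback)."""
    return depth_mm if level >= stop_at else depth_mm + step_mm

# Example: coarse class 2 of 4, thrust force fallen from 1.0 N to 0.3 N.
level = fuse_completion_level(2, np.array([1.0, 0.8, 0.5, 0.3]))
print(f"fused completion level: {level:.2f}")
```

In a controller of this kind, the fused level would gate per-waypoint depth updates so that drilling stops at each point once the remaining bone is judged thin enough, avoiding penetration of the tissue beneath the skull.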


URL

https://arxiv.org/abs/2406.14135

PDF

https://arxiv.org/pdf/2406.14135.pdf

