Paper Reading AI Learner

Ultrasound Based Prosthetic Arm Control

2023-01-31 17:53:16
Ayush Singh, Harikrishnan Pisharody Gopalkrishnan, Mahesh Raveendranatha Panicker

Abstract

The loss of an upper limb can have a substantial impact on a person's quality of life, since it limits the ability to work, interact, and perform daily tasks independently. Artificial limbs are used in prosthetics to help people who have lost limbs improve their function and quality of life. Despite significant breakthroughs in prosthetic technology, rejection rates for complex prosthetic devices remain high[1]-[5]. A quarter to a third of upper-limb amputees abandon their prosthetics due to a lack of comprehension of the technology. The most extensively used method for monitoring muscle activity and controlling a prosthetic arm, surface electromyography (sEMG), has significant drawbacks, including a low signal-to-noise ratio and poor amplitude resolution[6]-[8]. Unlike myoelectric control systems, which use electrical muscle activation to calculate end-effector velocity, our strategy employs ultrasound to directly monitor mechanical muscle deformation and then uses the extracted signals to proportionally control end-effector position. This investigation made use of four separate hand motions performed by three physically healthy volunteers. A virtual robotic hand simulation was created using ROS. After observing performance comparable to that of a natural hand with very little training, we concluded that our control method is reliable and natural.
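The key distinction the abstract draws is that the ultrasound-derived signal drives end-effector *position* proportionally, rather than integrating a velocity command as in myoelectric control. A minimal sketch of such a proportional mapping is below; the deformation range and joint limits are illustrative assumptions, not values from the paper.

```python
def deformation_to_joint_angle(deformation, d_min=0.0, d_max=1.0,
                               theta_min=0.0, theta_max=90.0):
    """Map an ultrasound-derived muscle-deformation value to a joint angle.

    Unlike velocity-based myoelectric control, the output here is an
    absolute position proportional to the current deformation, so the
    hand returns to rest when the muscle relaxes.
    """
    # Clamp the measurement to the calibrated deformation range.
    d = min(max(deformation, d_min), d_max)
    # Normalize to [0, 1], then scale linearly into the joint range.
    alpha = (d - d_min) / (d_max - d_min)
    return theta_min + alpha * (theta_max - theta_min)

print(deformation_to_joint_angle(0.5))  # mid-range deformation -> 45.0
```

In a ROS simulation like the one described, a value computed this way would typically be published periodically as a joint command for the virtual hand.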


URL

https://arxiv.org/abs/2301.13809

PDF

https://arxiv.org/pdf/2301.13809.pdf

