Paper Reading AI Learner

At First Contact: Stiffness Estimation Using Vibrational Information for Prosthetic Grasp Modulation

2024-11-27 16:50:42
Anway S. Pimpalkar, Ariel Slepyan, Nitish V. Thakor

Abstract

Stiffness estimation is crucial for delicate object manipulation in robotic and prosthetic hands but remains challenging because it typically depends on force and displacement measurements and on real-time sensory integration. This study presents a piezoelectric sensing framework for stiffness estimation at first contact during pinch grasps, addressing the limitations of traditional force-based methods. Inspired by human skin, a multimodal tactile sensor that captures vibrational and force data is developed and integrated into a prosthetic hand's fingertip. Machine learning models, including support vector machines and convolutional neural networks, demonstrate that vibrational signals within the critical 15 ms after first contact reliably encode stiffness, achieving classification accuracies of up to 98.6% and regression errors as low as 2.39 Shore A on real-world objects of varying stiffness. Inference times of less than 1.5 ms are significantly faster than the average grasp closure time (16.65 ms in our dataset), enabling real-time stiffness estimation before the object is fully grasped. By leveraging the transient asymmetry in grasp dynamics, where one finger contacts the object before the others, this method enables early grasp modulation, enhancing safety and intuitiveness in prosthetic hands while offering broad applications in robotics.
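
To make the sensing-to-classification pipeline concrete, here is a minimal, illustrative sketch of the core idea described in the abstract: slice out the 15 ms vibration window after first contact and feed it to a support vector machine. This is not the authors' implementation; the sampling rate, the hand-crafted feature set, and all names (post_contact_window, window_features, traces, contact_indices, labels) are assumptions for illustration only.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS_HZ = 10_000   # assumed vibration sampling rate; not stated in the abstract
WINDOW_MS = 15   # post-contact window highlighted in the abstract

def post_contact_window(signal, contact_idx, fs_hz=FS_HZ, window_ms=WINDOW_MS):
    """Slice out the first `window_ms` of vibration after first contact."""
    n = int(fs_hz * window_ms / 1000)
    return signal[contact_idx:contact_idx + n]

def window_features(window):
    """Toy hand-crafted features; the paper's actual inputs may differ."""
    return np.array([
        window.std(),                      # overall vibration amplitude
        np.abs(window).max(),              # peak transient
        np.sum(window ** 2),               # signal energy
        np.mean(np.abs(np.diff(window))),  # high-frequency content
    ])

def train_stiffness_classifier(traces, contact_indices, labels):
    """traces, contact_indices, labels are hypothetical dataset variables."""
    X = np.stack([
        window_features(post_contact_window(s, c))
        for s, c in zip(traces, contact_indices)
    ])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, labels)
    return clf
```

Swapping the SVC for a regressor (e.g., scikit-learn's SVR) with Shore A hardness values as targets gives the regression variant to which the reported 2.39 Shore A error corresponds; the CNN models mentioned in the abstract would presumably consume the raw windowed signal rather than hand-crafted features.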

URL

https://arxiv.org/abs/2411.18507

PDF

https://arxiv.org/pdf/2411.18507.pdf

