Paper Reading AI Learner

Sparse Wearable Sonomyography Sensor-based Proprioceptive Proportional Control Across Multiple Gestures

2024-03-08 13:38:07
Anne Tryphosa Kamatham, Kavita Sharma, Srikumar Venkataraman, Biswarup Mukherjee

Abstract

Sonomyography (SMG) is a non-invasive technique that uses ultrasound imaging to detect the dynamic activity of muscles. Wearable SMG systems have recently gained popularity as potential human-computer interfaces owing to their superior performance compared to conventional methods. This paper demonstrates real-time positional proportional control of multiple gestures using a multiplexed 8-channel wearable SMG system. The amplitude-mode ultrasound signals from the SMG system were used to detect muscle activity in the forearms of 8 healthy individuals, and the derived signals controlled the on-screen movement of a cursor. A target achievement task was performed to analyze the performance of our SMG-based human-machine interface. The wearable SMG system provided accurate, stable, and intuitive control in real time, achieving an average success rate greater than 80% for all gestures. Furthermore, we evaluated the wearable SMG system's ability to detect volitional movement and to decode movement kinematics from SMG trajectories using standard performance metrics. Our results provide insights that validate SMG as an intuitive human-machine interface.
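The abstract describes mapping amplitude-mode (A-mode) ultrasound activity to a proportional cursor position, but it does not detail the signal-processing pipeline. The sketch below is a minimal illustration of one plausible approach, assuming an envelope-deviation feature per channel and a per-user rest/maximum-contraction calibration; the feature choice, normalization, and channel averaging are assumptions for illustration, not the authors' method.

```python
import numpy as np

# Minimal illustrative sketch (assumptions, not the paper's pipeline):
# per-channel activity = deviation of the A-mode echo envelope from a resting
# baseline; cursor position = min-max normalization of that activity between
# rest and maximal voluntary contraction (MVC) recorded during calibration.

def amode_activity(frame: np.ndarray, rest_frame: np.ndarray) -> float:
    """Scalar muscle-activity estimate from one A-mode frame of one channel."""
    return float(np.sum(np.abs(frame - rest_frame)))

def proportional_position(activity: float, rest_level: float, mvc_level: float) -> float:
    """Map activity into [0, 1] for positional proportional cursor control."""
    span = max(mvc_level - rest_level, 1e-9)
    return float(np.clip((activity - rest_level) / span, 0.0, 1.0))

# Example with simulated data: 8 channels x 1000 samples per A-mode frame.
rng = np.random.default_rng(0)
rest = rng.random((8, 1000))                 # resting baseline frames
frame = rest + 0.3 * rng.random((8, 1000))   # frame captured during a gesture

# Combine the 8 channels by averaging their activities (also an assumption).
activity = np.mean([amode_activity(frame[ch], rest[ch]) for ch in range(8)])
cursor = proportional_position(activity, rest_level=0.0, mvc_level=300.0)
print(f"cursor position (0 = rest, 1 = MVC): {cursor:.2f}")
```

In a real system, the calibration levels and channel weighting would be set per user and per gesture; the values above are placeholders.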

Abstract (translated)

Sonomyography(SMG,超声肌动图)是一种非侵入性技术,利用超声成像检测肌肉的动态活动。可穿戴式SMG系统作为潜在的人机接口,因其相比传统方法更优越的性能而日益受到关注。本文使用多路复用的8通道可穿戴SMG系统,演示了对多个手势的实时位置比例控制。SMG系统采集的幅度模式超声信号用于检测8名健康受试者前臂的肌肉活动,并由所得信号控制屏幕上光标的移动。我们通过目标达成任务分析了基于SMG的人机接口的性能。我们的可穿戴SMG系统在所有手势上的平均成功率均超过80%,实现了准确、稳定且直观的实时控制。此外,我们使用标准性能指标评估了该可穿戴SMG系统检测自主运动以及从SMG轨迹中解码运动学信息的能力。我们的结果为验证SMG作为直观的人机接口提供了依据。

URL

https://arxiv.org/abs/2403.05308

PDF

https://arxiv.org/pdf/2403.05308.pdf

