Paper Reading AI Learner

Towards Continuous Skin Sympathetic Nerve Activity Monitoring: Removing Muscle Noise

2024-10-26 04:10:14
Farnoush Baghestani, Mahdi Pirayesh Shirazi Nejad, Youngsun Kong, Ki H. Chon

Abstract

Continuous monitoring of non-invasive skin sympathetic nerve activity (SKNA) holds promise for understanding sympathetic nervous system (SNS) dynamics in various physiological and pathological conditions. However, muscle noise artifacts present a challenge for accurate SKNA analysis, particularly in real-life scenarios. This study proposes a deep convolutional neural network (CNN) approach to detect and remove muscle noise from SKNA recordings obtained via ECG electrodes. Twelve healthy participants underwent controlled experimental protocols involving cognitive stress induction and voluntary muscle movements while SKNA data were collected. Power spectral analysis revealed significant muscle noise interference within the SKNA frequency band (500-1000 Hz). A 2D CNN model was trained on spectrograms of the data segments to classify them into baseline, stress-induced SKNA, and muscle noise-contaminated periods, achieving an average accuracy of 89.85% across all subjects. Our findings underscore the importance of addressing muscle noise for accurate SKNA monitoring, a step toward wearable SKNA sensors for real-world applications.
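
The pipeline the abstract describes (band-pass the ECG-electrode signal to the 500-1000 Hz SKNA band, compute a spectrogram per segment, and classify segments with a 2D CNN into baseline, stress-induced SKNA, or muscle noise) can be sketched roughly as below. This is a minimal illustration under stated assumptions, not the authors' implementation: the sampling rate, segment length, spectrogram parameters, network architecture, and class-label ordering are all illustrative.

```python
# Minimal sketch (not the paper's implementation): band-pass to the SKNA band
# (500-1000 Hz), compute a log-magnitude spectrogram, and classify the segment
# with a small 2D CNN into baseline / stress-induced SKNA / muscle noise.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import butter, sosfiltfilt, spectrogram

FS = 10_000          # assumed sampling rate (Hz)
SEG_SECONDS = 1.0    # assumed analysis-segment length

def skna_spectrogram(segment: np.ndarray) -> np.ndarray:
    """Band-pass to 500-1000 Hz, then return a log-magnitude spectrogram."""
    sos = butter(4, [500, 1000], btype="bandpass", fs=FS, output="sos")
    filtered = sosfiltfilt(sos, segment)
    _, _, sxx = spectrogram(filtered, fs=FS, nperseg=256, noverlap=128)
    return np.log1p(sxx).astype(np.float32)

class SegmentCNN(nn.Module):
    """Small 2D CNN for 3-class segment classification (illustrative architecture)."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):  # x: (batch, 1, freq_bins, time_bins)
        return self.classifier(self.features(x).flatten(1))

# Example: classify one synthetic segment (random noise stands in for real SKNA data).
segment = np.random.randn(int(FS * SEG_SECONDS))
spec = torch.from_numpy(skna_spectrogram(segment)).unsqueeze(0).unsqueeze(0)
logits = SegmentCNN()(spec)
print(logits.argmax(dim=1))  # assumed mapping: 0 = baseline, 1 = SKNA, 2 = muscle noise
```

In practice the classifier would be trained on labeled spectrograms from the baseline, cognitive-stress, and voluntary-movement periods; segments flagged as muscle noise could then be excluded before downstream SKNA analysis.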

URL

https://arxiv.org/abs/2410.21319

PDF

https://arxiv.org/pdf/2410.21319.pdf

