
Computational ergonomics for task delegation in Human-Robot Collaboration: spatiotemporal adaptation of the robot to the human through contactless gesture recognition

2022-03-21 14:23:00
Brenda Elizabeth Olivas-Padilla, Dimitris Papanagiotou, Gavriela Senteri, Sotiris Manitsaris, Alina Glushkova

Abstract

The high prevalence of work-related musculoskeletal disorders (WMSDs) could be addressed by optimizing Human-Robot Collaboration (HRC) frameworks for manufacturing applications. In this context, this paper proposes two hypotheses for ergonomically effective task delegation and HRC. The first hypothesis states that professional tasks can be ergonomically quantified using motion data from a reduced set of sensors; the most hazardous tasks can then be delegated to a collaborative robot. The second hypothesis is that, by including gesture recognition and spatial adaptation, the ergonomics of an HRC scenario can be improved by avoiding needless motions that could expose operators to ergonomic risks and by lowering the physical effort required of operators. An HRC scenario for a television manufacturing process is optimized to test both hypotheses. For the ergonomic evaluation, motion primitives with known ergonomic risks were modeled so that they could be detected in professional tasks and used to estimate a risk score based on the European Assembly Worksheet (EAWS). A Deep Learning gesture recognition module trained on egocentric television assembly data was used to complement the collaboration between the human operator and the robot. Additionally, a skeleton-tracking algorithm provided the robot with information about the operator's pose, allowing it to spatially adapt its motion to the operator's anthropometrics. Three experiments were conducted to determine the effect of gesture recognition and spatial adaptation on the operator's range of motion. The rate of spatial adaptation was used as a key performance indicator (KPI), and a new KPI for measuring the reduction in the operator's motion is also introduced.
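
The EAWS-based risk scoring described in the abstract lends itself to a simple illustration: detected motion primitives are mapped to posture load points and weighted by the time spent in them, following the worksheet's time-based scoring idea. The sketch below is a minimal Python reconstruction of that idea; the primitive names, point values, and per-frame input format are assumptions for illustration, not the authors' implementation.

# Illustrative sketch only: maps per-frame motion-primitive labels to an
# EAWS-style posture score. Primitive names, point values, and the input
# format are assumptions, not taken from the paper.
from collections import Counter

# Hypothetical EAWS-like load points per detected motion primitive.
EAWS_POINTS = {
    "trunk_bend_forward": 4.0,   # strong forward bending of the trunk
    "trunk_rotation": 2.0,
    "arms_above_shoulder": 3.0,  # overhead work
    "kneeling": 5.0,
    "neutral": 0.0,
}

def eaws_posture_score(primitives, frame_rate=30.0):
    """Accumulate a posture score from per-frame primitive labels.

    EAWS weights postures by the time spent in them, so each detected
    frame contributes its primitive's points scaled by the frame's
    duration, converted to minutes.
    """
    seconds_per_frame = 1.0 / frame_rate
    counts = Counter(primitives)
    return sum(
        EAWS_POINTS.get(p, 0.0) * n * seconds_per_frame / 60.0
        for p, n in counts.items()
    )

# Example: one minute of detections dominated by overhead work scores
# higher than a mostly neutral sequence.
risky = ["arms_above_shoulder"] * 1500 + ["neutral"] * 300
safe = ["neutral"] * 1800
print(eaws_posture_score(risky), eaws_posture_score(safe))

Under the paper's first hypothesis, a task whose score exceeds a chosen threshold would then be a candidate for delegation to the collaborative robot.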
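
The two KPIs named at the end of the abstract also have natural formulations as simple ratios. The functions below are a plausible reading of those definitions, assuming per-handover adaptation counts and a summed range-of-motion measure; the paper's exact definitions may differ.

# Plausible formulations of the two KPIs named in the abstract; variable
# names and measures are assumptions, not the authors' definitions.

def spatial_adaptation_rate(adapted_handovers, total_handovers):
    """Fraction of robot handovers in which the robot adapted its
    motion to the operator's tracked skeleton."""
    return adapted_handovers / total_handovers

def motion_reduction_kpi(baseline_rom, adapted_rom):
    """Assumed motion-reduction KPI: relative decrease in the operator's
    range of motion (e.g., summed joint excursions) between the
    non-adaptive baseline and the gesture-aware, spatially adaptive
    scenario. Positive values mean fewer needless motions."""
    return (baseline_rom - adapted_rom) / baseline_rom

# Example with made-up numbers: the robot adapted on 18 of 20 handovers,
# and summed joint excursion dropped from 240 to 180 degrees.
print(spatial_adaptation_rate(18, 20))     # 0.9
print(motion_reduction_kpi(240.0, 180.0))  # 0.25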

URL

https://arxiv.org/abs/2203.11007

PDF

https://arxiv.org/pdf/2203.11007.pdf
