Recognizing American Sign Language Manual Signs from RGB-D Videos

2019-06-07 00:56:11
Longlong Jing, Elahe Vahdani, Matt Huenerfauth, Yingli Tian
           

Abstract

In this paper, we propose a 3D Convolutional Neural Network (3DCNN) based multi-stream framework to recognize American Sign Language (ASL) manual signs (consisting of movements of the hands, as well as non-manual face movements in some cases) in real time from RGB-D videos, by fusing multimodal features including hand gestures, facial expressions, and body poses from multiple channels (RGB, depth, motion, and skeleton joints). To learn the overall temporal dynamics in a video, a proxy video is generated by selecting a subset of frames for each video, and these proxy videos are then used to train the proposed 3DCNN model. We collect a new ASL dataset, ASL-100-RGBD, which contains 42 RGB-D videos captured by a Microsoft Kinect V2 camera, each containing 100 ASL manual signs, including the RGB channel, depth maps, skeleton joints, face features, and HDface. The dataset is fully annotated for each semantic region (i.e., the time duration of each word that the human signer performs). Our proposed method achieves 92.88% accuracy for recognizing 100 ASL words on our newly collected ASL-100-RGBD dataset. The effectiveness of our framework for recognizing hand gestures from RGB-D videos is further demonstrated on the Chalearn IsoGD dataset, where it achieves 76% accuracy, 5.51% higher than the state-of-the-art result under average fusion, while using only 5 channels instead of the 12 channels used in the previous work.
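Two ideas in the abstract lend themselves to a short illustration: building a proxy video by selecting a subset of frames to summarize a clip's temporal dynamics, and score-level average fusion across modality streams (e.g. RGB, depth, motion, skeleton). The sketch below is a minimal interpretation of those two steps in NumPy; the frame count (32), uniform sampling strategy, and the exact number of streams are illustrative assumptions, not the authors' reported implementation.

```python
import numpy as np

def make_proxy_video(frames, num_samples=32):
    """Select an evenly spaced subset of frames to form a proxy video.

    frames: array of shape (T, H, W, C). num_samples and uniform spacing
    are assumptions; the paper only states that a subset of frames is
    selected to capture the overall temporal dynamics of the clip.
    """
    total = frames.shape[0]
    idx = np.linspace(0, total - 1, num_samples).astype(int)
    return frames[idx]

def average_fusion(stream_scores):
    """Average per-class scores from multiple modality streams
    (score-level average fusion across channels)."""
    return np.mean(np.stack(stream_scores, axis=0), axis=0)

# Toy usage: a 120-frame clip and 5 modality streams over 100 ASL classes.
clip = np.random.rand(120, 112, 112, 3)
proxy = make_proxy_video(clip)                      # shape (32, 112, 112, 3)
scores = [np.random.dirichlet(np.ones(100)) for _ in range(5)]
fused = average_fusion(scores)
predicted_class = int(np.argmax(fused))
```

In this reading, each stream would be a 3DCNN trained on proxy videos from one channel, and the fused score vector gives the final word prediction; the Dirichlet vectors above merely stand in for per-stream softmax outputs.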

URL

https://arxiv.org/abs/1906.02851

PDF

https://arxiv.org/pdf/1906.02851.pdf

