Dynamic Gesture Recognition by Using CNNs and Star RGB: a Temporal Information Condensation

2019-04-10 00:39:32
Clebeson Canuto dos Santos, Jorge Leonid Aching Samatelo, Raquel Frizera Vassallo

Abstract

With the advance of technology, machines are increasingly present in people's daily lives, and growing effort has been devoted to developing interfaces, such as dynamic gestures, that provide an intuitive way of interaction. Currently, the most common trend is to use multimodal data, such as depth and skeleton information, to recognize dynamic gestures. However, using only color information would be more practical, since RGB cameras are already found in almost every public place and could be used for gesture recognition without installing additional equipment. The main difficulty with this approach is representing spatio-temporal information using color alone. With this in mind, we propose a technique called Star RGB, which describes a video clip containing a dynamic gesture as a single RGB image. This image is then passed to a classifier composed of two ResNet CNNs, a soft-attention ensemble, and a multilayer perceptron, which returns the predicted class label indicating the type of gesture in the input video. Experiments were carried out on the Montalbano and GRIT datasets. On the Montalbano dataset, the proposed approach achieved an accuracy of 94.58%, matching the state of the art on this dataset when only color information is considered. On the GRIT dataset, our approach achieves more than 98% accuracy, recall, precision, and F1-score, outperforming the reference approach by more than 6%.
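
The abstract names the Star RGB representation but does not give its formula. Below is a minimal sketch of one plausible temporal-condensation scheme, assuming (this is our assumption, not the paper's confirmed method) that the clip is split into three equal segments whose accumulated absolute frame differences become the R, G, and B channels:

    import numpy as np

    def star_rgb(frames):
        """Condense a gesture clip into one RGB image (hedged sketch).

        ASSUMPTION: segmentation into thirds and per-segment motion
        accumulation follow the general idea of "star" motion
        representations; the paper's exact weighting and normalization
        may differ.

        frames: (T, H, W, 3) uint8 array of video frames.
        returns: (H, W, 3) uint8 Star RGB image.
        """
        gray = frames.mean(axis=-1).astype(np.float32)  # (T, H, W) grayscale
        t = len(gray)
        bounds = [0, t // 3, 2 * t // 3, t]             # three equal segments
        channels = []
        for k in range(3):
            seg = gray[bounds[k]:bounds[k + 1]]
            # Motion energy: sum of absolute consecutive frame differences.
            motion = np.abs(np.diff(seg, axis=0)).sum(axis=0)
            # Normalize each channel independently to [0, 255].
            motion = 255.0 * motion / (motion.max() + 1e-8)
            channels.append(motion.astype(np.uint8))
        return np.stack(channels, axis=-1)

This way, early, middle, and late motion of the gesture land in different color channels, so a single image retains coarse temporal order for the CNN.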

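Likewise, a hedged PyTorch sketch of the described classifier, i.e. two ResNet backbones, a soft-attention ensemble over their features, and an MLP head. The backbone depth (ResNet-50), feature dimensions, and the exact attention form are assumptions for illustration, not the paper's confirmed design:

    import torch
    import torch.nn as nn
    from torchvision import models

    class StarRGBClassifier(nn.Module):
        """Two-ResNet soft-attention ensemble + MLP head (hedged sketch)."""

        def __init__(self, num_classes):
            super().__init__()

            def backbone():
                # ASSUMPTION: ResNet-50; pretrained weights could be loaded instead.
                net = models.resnet50(weights=None)
                net.fc = nn.Identity()  # expose the 2048-d feature vector
                return net

            self.cnn_a, self.cnn_b = backbone(), backbone()
            # Soft attention: one scalar weight per stream, softmax-normalized.
            self.attention = nn.Linear(2 * 2048, 2)
            self.mlp = nn.Sequential(
                nn.Linear(2048, 512), nn.ReLU(), nn.Dropout(0.5),
                nn.Linear(512, num_classes),
            )

        def forward(self, x):
            fa, fb = self.cnn_a(x), self.cnn_b(x)  # (B, 2048) each
            w = torch.softmax(self.attention(torch.cat([fa, fb], dim=1)), dim=1)
            fused = w[:, :1] * fa + w[:, 1:] * fb  # attention-weighted ensemble
            return self.mlp(fused)                 # class logits

Usage would be logits = StarRGBClassifier(num_classes)(star_rgb_batch), where star_rgb_batch is a (B, 3, H, W) tensor of Star RGB images.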

URL

https://arxiv.org/abs/1904.08505

PDF

https://arxiv.org/pdf/1904.08505.pdf
