Abstract
With the advance of technology, machines are increasingly present in people's daily lives, and growing effort has gone into developing interfaces, such as dynamic gestures, that provide an intuitive way of interaction. The current trend is to use multimodal data, such as depth and skeleton information, to recognize dynamic gestures. However, relying on color information alone would be more practical, since RGB cameras are already found in almost every public place and could be used for gesture recognition without installing additional equipment. The main difficulty of this approach is representing spatio-temporal information using color alone. With this in mind, we propose a technique called Star RGB, which describes a video clip containing a dynamic gesture as a single RGB image. This image is fed to a classifier composed of two ResNet CNNs, a soft-attention ensemble, and a multilayer perceptron, which returns the predicted class label indicating the type of gesture in the input video. Experiments were carried out on the Montalbano and GRIT datasets. On the Montalbano dataset, the proposed approach achieved an accuracy of 94.58%, reaching the state of the art for this dataset when only color information is considered. On the GRIT dataset, our approach achieves more than 98% accuracy, recall, precision, and F1-score, outperforming the reference approach by more than 6%.
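The soft-attention ensemble mentioned above can be illustrated with a minimal NumPy sketch: two feature vectors (standing in for the outputs of the two ResNet branches) are weighted by a softmax over scalar attention scores and summed before classification. All names, dimensions, and parameters here are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 512-d feature vectors from the two ResNet branches
# (dimension chosen for illustration only).
feat_a = rng.standard_normal(512)
feat_b = rng.standard_normal(512)


def soft_attention_fuse(features, w):
    """Fuse branch features: softmax over per-branch scores, weighted sum."""
    scores = np.array([f @ w for f in features])   # one scalar score per branch
    alphas = np.exp(scores - scores.max())         # numerically stable softmax
    alphas /= alphas.sum()
    fused = sum(a * f for a, f in zip(alphas, features))
    return fused, alphas


w = rng.standard_normal(512) * 0.01                # toy attention parameters
fused, alphas = soft_attention_fuse([feat_a, feat_b], w)

print(fused.shape)        # fused vector keeps the branch dimensionality
print(alphas.sum())       # attention weights form a convex combination
```

In the full pipeline, the fused vector would then be passed to a multilayer perceptron that outputs one logit per gesture class.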
URL
https://arxiv.org/abs/1904.08505