Abstract
Skeleton-based gesture recognition methods have achieved strong results using Graph Convolutional Networks (GCNs). Moreover, a context-dependent adaptive topology over neighborhood vertices, together with an attention mechanism, helps a model represent actions more faithfully. In this paper, we propose a self-attention GCN hybrid model, Multi-Scale Spatial-Temporal self-attention (MSST)-GCN, which effectively improves modeling ability and achieves state-of-the-art results on several datasets. We utilize a spatial self-attention module with adaptive topology to capture intra-frame interactions among different body parts, and a temporal self-attention module to examine inter-frame correlations of each node. These two modules are followed by a multi-scale convolutional network with dilations, which captures not only the long-range temporal dependencies of joints but also the long-range spatial dependencies (i.e., long-distance dependencies) of node temporal behaviors. The resulting features are combined into high-level spatial-temporal representations, and the predicted action is output by a softmax classifier.
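The two core mechanisms named in the abstract, spatial self-attention refined by an adaptive topology and a multi-scale temporal convolution with dilations, can be sketched as follows. This is a minimal pure-Python illustration under stated assumptions, not the authors' implementation: the function names are invented, the real model uses learned query/key projections and multi-channel tensors, and adding the adjacency directly to the attention scores is just one common way to realize an adaptive topology.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def spatial_self_attention(X, adjacency):
    """X: V joint feature vectors (V x C) for one frame.
    adjacency: V x V base skeleton topology; dot-product attention
    scores act as a data-dependent (adaptive) refinement of it."""
    V, C = len(X), len(X[0])
    out = []
    for i in range(V):
        # similarity of joint i with every joint j, biased by the topology
        scores = [sum(a * b for a, b in zip(X[i], X[j])) / math.sqrt(C)
                  + adjacency[i][j] for j in range(V)]
        w = softmax(scores)
        # aggregate all joints' features with the attention weights
        out.append([sum(w[j] * X[j][c] for j in range(V)) for c in range(C)])
    return out

def multiscale_dilated_conv1d(seq, kernels, dilations):
    """seq: one joint's scalar feature over T frames. Each (kernel,
    dilation) pair is one branch; larger dilations widen the temporal
    receptive field. Branch outputs are summed; zero padding keeps T."""
    T = len(seq)
    out = [0.0] * T
    for k, d in zip(kernels, dilations):
        half = len(k) // 2
        for t in range(T):
            acc = 0.0
            for j, w in enumerate(k):
                idx = t + (j - half) * d  # dilated tap position
                if 0 <= idx < T:
                    acc += w * seq[idx]
            out[t] += acc
    return out
```

Stacking the two attention modules and then the dilated branches mirrors the order described above: per-frame joint interactions first, then long-range temporal aggregation of each node's trajectory.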
URL
https://arxiv.org/abs/2404.02624