Abstract
Human actions comprise the joint motion of articulated body parts, or `gestures'. The human skeleton is intuitively represented as a sparse graph, with joints as nodes and the natural connections between them as edges. Graph convolutional networks have been used to recognize actions from skeletal videos. We introduce a part-based graph convolutional network (PB-GCN) for this task, inspired by Deformable Part-based Models (DPMs). We divide the skeleton graph into four subgraphs with joints shared across them and learn a recognition model using a part-based graph convolutional network. We show that such a model improves recognition performance compared to a model using the entire skeleton graph. Instead of using 3D joint coordinates as node features, we show that using relative coordinates and temporal displacements boosts performance. Our model achieves state-of-the-art performance on two challenging benchmark datasets, NTURGB+D and HDM05, for skeletal action recognition.
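The node features described above (relative coordinates plus temporal displacements, in place of raw 3D joint coordinates) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name `node_features`, the choice of a single reference joint, and the zero-padding of the first frame's displacement are all assumptions for the sketch.

```python
import numpy as np

def node_features(joints, ref=0):
    """Build per-joint features from a skeleton sequence.

    joints: array of shape (T, N, 3) -- T frames, N joints, 3D coordinates.
    ref: index of a reference joint for relative coordinates
         (a hypothetical choice for this sketch).
    Returns an array of shape (T, N, 6): relative coordinates
    concatenated with frame-to-frame temporal displacements.
    """
    # Coordinates of each joint relative to the reference joint.
    rel = joints - joints[:, ref:ref + 1, :]
    # Temporal displacement of each joint between consecutive frames;
    # the first frame has no predecessor, so its displacement is zero.
    disp = np.zeros_like(joints)
    disp[1:] = joints[1:] - joints[:-1]
    return np.concatenate([rel, disp], axis=-1)

# Example: a random sequence of 10 frames with 25 joints (as in NTURGB+D).
seq = np.random.rand(10, 25, 3)
feats = node_features(seq)
print(feats.shape)  # (10, 25, 6)
```

Each node thus carries a 6-dimensional geometric feature that is invariant to the subject's absolute position in the camera frame, which the abstract reports boosts recognition performance over raw coordinates.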
URL
https://arxiv.org/abs/1809.04983