Abstract
Thanks to the compact yet rich high-level representations it offers, skeleton-based human action recognition has recently become a highly active research topic. Previous studies have demonstrated that modeling joint relationships in the spatial and temporal dimensions provides information critical to action recognition. However, effectively encoding the global dependencies of joints during spatio-temporal feature extraction remains challenging. In this paper, we introduce the Action Capsule, which identifies action-related key joints by considering the latent correlations of joints in a skeleton sequence. We show that, during inference, our end-to-end network attends to a set of joints specific to each action, whose encoded spatio-temporal features are aggregated to recognize the action. Additionally, using multiple stages of action capsules enhances the network's ability to distinguish similar actions. As a result, our network outperforms state-of-the-art approaches on the N-UCLA dataset and obtains competitive results on the NTURGBD dataset, while having significantly lower computational requirements as measured in GFLOPs.
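The abstract does not specify the Action Capsule mechanism, but the idea of attending to a subset of joints and aggregating their spatio-temporal features can be sketched generically. The following is a minimal, hypothetical illustration (all names, shapes, and the softmax-attention pooling are assumptions, not the authors' implementation):

```python
import numpy as np

def joint_attention_pool(feats, w):
    """Aggregate per-joint features with attention over joints.

    feats: (T, J, C) array of spatio-temporal features for T frames,
           J skeleton joints, C channels (hypothetical shapes).
    w:     (C,) scoring vector standing in for a learned parameter.
    Returns pooled features (T, C) and attention weights (T, J).
    """
    scores = feats @ w                              # (T, J) per-joint relevance
    scores -= scores.max(axis=1, keepdims=True)     # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)         # softmax over joints
    pooled = (attn[..., None] * feats).sum(axis=1)  # weighted aggregation
    return pooled, attn

rng = np.random.default_rng(0)
T, J, C = 4, 25, 8                                  # 25 joints, as in NTURGBD skeletons
feats = rng.standard_normal((T, J, C))
w = rng.standard_normal(C)
pooled, attn = joint_attention_pool(feats, w)
print(pooled.shape, attn.shape)                     # (4, 8) (4, 25)
```

In a trained network, high attention weights would concentrate on the action-specific key joints; here the weights are random but illustrate the aggregation step.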
URL
https://arxiv.org/abs/2301.13090