Abstract
Accurate 3D human pose estimation is a challenging task due to occlusion and depth ambiguity. In this paper, we introduce a multi-hop graph transformer network for 2D-to-3D human pose estimation in videos. It leverages the strengths of multi-head self-attention and multi-hop graph convolutional networks with disentangled neighborhoods to capture spatio-temporal dependencies and handle long-range interactions. The proposed architecture consists of a graph attention block, composed of stacked layers of multi-head self-attention and graph convolution with a learnable adjacency matrix, and a multi-hop graph convolutional block, comprised of multi-hop convolutional and dilated convolutional layers. The combination of multi-head self-attention and multi-hop graph convolutional layers enables the model to capture both local and global dependencies, while the integration of dilated convolutional layers enhances its ability to handle the spatial details required for accurate localization of the human body joints. Extensive experiments demonstrate the effectiveness and generalization ability of our model, which achieves competitive performance on benchmark datasets.
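The multi-hop aggregation with disentangled neighborhoods mentioned above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the helper names, the per-hop weight matrices, and the row-normalization choice are illustrative assumptions. The idea is that each hop order k gets its own adjacency mask (joints exactly k hops apart on the skeleton) and its own weights, so distant joints contribute through separate channels instead of being mixed with immediate neighbors.

```python
import numpy as np

def disentangled_hops(A, K):
    """Return [A_1, ..., A_K] where A_k[i, j] = 1 iff the shortest path
    from joint i to joint j is exactly k hops (disentangled neighborhoods:
    each hop order is aggregated separately)."""
    n = A.shape[0]
    reached = np.eye(n, dtype=int)   # joints reachable in < k hops (incl. self)
    frontier = np.eye(n, dtype=int)  # joints reachable in <= k-1 hops
    hops = []
    for _ in range(K):
        frontier = (frontier @ A > 0).astype(int)      # reachable in <= k hops
        exact = frontier * (reached == 0)              # exactly k hops away
        hops.append(exact.astype(float))
        reached = reached + frontier
    return hops

def multi_hop_gconv(H, A, weights):
    """One multi-hop graph-convolution layer (sketch): sum over hop orders
    of (row-normalized A_k) @ H @ W_k, followed by a ReLU."""
    hops = disentangled_hops(A, len(weights))
    out = np.zeros((H.shape[0], weights[0].shape[1]))
    for A_k, W_k in zip(hops, weights):
        deg = A_k.sum(axis=1, keepdims=True)
        A_norm = np.divide(A_k, deg, out=np.zeros_like(A_k), where=deg > 0)
        out += A_norm @ H @ W_k
    return np.maximum(out, 0.0)

# Toy example: a 5-joint chain (e.g. a spine) with 2D input features.
A = np.zeros((5, 5), dtype=int)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1
rng = np.random.default_rng(0)
H = rng.standard_normal((5, 2))                      # per-joint features
weights = [rng.standard_normal((2, 4)) for _ in range(3)]  # K = 3 hops
out = multi_hop_gconv(H, A, weights)                 # shape (5, 4)
```

In a real model the learnable adjacency would replace the fixed skeleton mask, and this spatial layer would be interleaved with the self-attention and dilated temporal convolutions described above.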
URL
https://arxiv.org/abs/2405.03055