Abstract
In this paper, a novel Diffusion-based 3D Pose estimation (D3DP) method with Joint-wise reProjection-based Multi-hypothesis Aggregation (JPMA) is proposed for probabilistic 3D human pose estimation. On the one hand, D3DP generates multiple possible 3D pose hypotheses for a single 2D observation. It gradually diffuses the ground truth 3D poses to a random distribution, and learns a denoiser conditioned on 2D keypoints to recover the uncontaminated 3D poses. The proposed D3DP is compatible with existing 3D pose estimators and allows users to balance efficiency and accuracy during inference through two customizable parameters. On the other hand, JPMA is proposed to assemble the multiple hypotheses generated by D3DP into a single 3D pose for practical use. It reprojects the 3D pose hypotheses to the 2D camera plane, selects the best hypothesis joint-by-joint based on the reprojection errors, and combines the selected joints into the final pose. The proposed JPMA conducts aggregation at the joint level and makes use of the 2D prior information, both of which have been overlooked by previous approaches. Extensive experiments on the Human3.6M and MPI-INF-3DHP datasets show that our method outperforms the state-of-the-art deterministic and probabilistic approaches by 1.5% and 8.9%, respectively. Code is available at this https URL.
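The forward diffusion that D3DP applies to ground-truth poses can be sketched with a standard DDPM-style noising process. The linear beta schedule, the number of timesteps, and the function names below are illustrative assumptions, not details taken from the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear beta schedule (standard DDPM choice; the paper's exact schedule
# is not stated in the abstract, so this is an assumption).
T = 1000
betas = np.linspace(1e-4, 2e-2, T)
alpha_bar = np.cumprod(1.0 - betas)  # cumulative signal-retention factor

def diffuse(x0, t):
    """Forward step: corrupt a ground-truth 3D pose x0 of shape (J, 3)
    to timestep t by mixing it with Gaussian noise."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

# Near t = T, xt is almost pure Gaussian noise; a denoiser conditioned on
# the 2D keypoints is then trained to recover x0 from (xt, t). Sampling
# multiple noise draws at inference yields multiple 3D pose hypotheses.
```

Because each reverse-diffusion run starts from independent noise, drawing several samples naturally produces the multiple hypotheses that JPMA later aggregates.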
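The joint-wise aggregation step of JPMA can be sketched as follows, assuming a known camera projection. The `project` callable, array shapes, and names are hypothetical; the abstract specifies only the logic of reprojecting, scoring per joint, and recombining:

```python
import numpy as np

def jpma(hyps_3d, kp_2d, project):
    """Joint-wise reProjection-based Multi-hypothesis Aggregation (sketch).

    hyps_3d: (H, J, 3) array of H candidate 3D poses.
    kp_2d:   (J, 2) detected 2D keypoints (the 2D prior).
    project: camera projection mapping a (J, 3) pose to (J, 2); assumed given.
    """
    # Reproject every hypothesis onto the 2D camera plane.
    reproj = np.stack([project(h) for h in hyps_3d])     # (H, J, 2)
    # Per-joint reprojection error against the 2D keypoints.
    err = np.linalg.norm(reproj - kp_2d[None], axis=-1)  # (H, J)
    # For each joint, pick the hypothesis with the smallest error.
    best = err.argmin(axis=0)                            # (J,)
    # Assemble the final pose joint-by-joint from the winning hypotheses.
    return hyps_3d[best, np.arange(hyps_3d.shape[1])]    # (J, 3)
```

Selecting per joint rather than per pose lets the final estimate mix joints from different hypotheses, which is the level of granularity the abstract says prior aggregation schemes overlooked.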
URL
https://arxiv.org/abs/2303.11579