Abstract
Gait datasets are essential for gait research. However, this paper observes that present benchmarks, whether conventional constrained datasets or emerging real-world ones, fall short in covariate diversity. To bridge this gap, we undertook an arduous 20-month effort to collect a cross-covariate gait recognition (CCGR) dataset. The CCGR dataset has 970 subjects and about 1.6 million sequences; almost every subject has 33 views and 53 different covariates. Compared with existing datasets, CCGR offers diversity at both the population and individual levels. In addition, the views and covariates are well labeled, enabling analysis of the effects of different factors. CCGR provides multiple types of gait data, including RGB, parsing, silhouette, and pose, offering researchers a comprehensive resource for exploration. To delve deeper into cross-covariate gait recognition, we propose parsing-based gait recognition (ParsingGait), which utilizes the newly proposed parsing data. We have conducted extensive experiments. Our main results show: 1) Cross-covariate conditions emerge as a pivotal challenge for practical applications of gait recognition. 2) ParsingGait demonstrates remarkable potential for further advancement. 3) Alarmingly, existing SOTA methods achieve less than 43% accuracy on CCGR, highlighting the urgency of exploring cross-covariate gait recognition. Link: this https URL.
URL
https://arxiv.org/abs/2312.14404