Abstract
Gait, the walking pattern of an individual, is one of the most important biometric modalities. Most existing gait recognition methods take silhouettes or articulated body models as the gait features. These methods suffer from degraded recognition performance when handling confounding variables, such as clothing, carrying condition, and view angle. To remedy this issue, we propose a novel AutoEncoder framework that explicitly disentangles pose and appearance features from RGB imagery; an LSTM-based integration of the pose features over time then produces the gait feature. In addition, we collect a Frontal-View Gait (FVG) dataset to focus on gait recognition from frontal-view walking, a challenging problem since this view contains minimal gait cues compared to other views. FVG also includes other important variations, e.g., walking speed, carrying, and clothing. With extensive experiments on the CASIA-B, USF, and FVG datasets, our method demonstrates superior performance to the state of the art quantitatively, the ability to disentangle features qualitatively, and promising computational efficiency.
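The pipeline the abstract describes, a frame-wise encoder that splits its output into pose and appearance parts, followed by an LSTM that integrates only the pose part over time, can be sketched as follows. This is a minimal illustration, not the paper's implementation: all layer sizes, the single-linear-layer encoder, and the random weights are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for this sketch (not from the paper).
D_IN, D_POSE, D_APP, D_GAIT = 64, 16, 32, 24

# Toy encoder: one linear layer whose output is split into a pose part
# and an appearance part (the paper trains this split with an AutoEncoder).
W_enc = rng.standard_normal((D_IN, D_POSE + D_APP)) * 0.1

def encode(frame):
    """Map one frame's feature vector to (pose, appearance) features."""
    h = np.tanh(frame @ W_enc)
    return h[:D_POSE], h[D_POSE:]

# Minimal LSTM cell that integrates pose features over time.
W_lstm = rng.standard_normal((D_POSE + D_GAIT, 4 * D_GAIT)) * 0.1
b_lstm = np.zeros(4 * D_GAIT)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c):
    z = np.concatenate([x, h]) @ W_lstm + b_lstm
    i, f, o, g = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

def gait_feature(frames):
    """Run the encoder per frame, feed only pose features to the LSTM;
    the final hidden state serves as the gait feature."""
    h, c = np.zeros(D_GAIT), np.zeros(D_GAIT)
    for frame in frames:
        pose, _appearance = encode(frame)  # appearance is discarded for gait
        h, c = lstm_step(pose, h, c)
    return h

video = rng.standard_normal((30, D_IN))  # a 30-frame walking sequence
feat = gait_feature(video)
print(feat.shape)  # (24,)
```

The key design point the sketch mirrors is that appearance features never reach the temporal model, so the aggregated gait feature is, by construction, driven only by the pose stream.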
URL
https://arxiv.org/abs/1904.04925