Abstract
Gait is one of the most promising biometrics, aiming to identify pedestrians by their walking patterns. However, prevailing methods are susceptible to confounders, so the networks struggle to focus on the regions that reflect effective walking patterns. To address this fundamental problem in gait recognition, we propose a Generative Counterfactual Intervention framework, dubbed GaitGCI, consisting of Counterfactual Intervention Learning (CIL) and Diversity-Constrained Dynamic Convolution (DCDC). CIL eliminates the impact of confounders by maximizing the likelihood difference between factual and counterfactual attention, while DCDC adaptively generates sample-wise factual/counterfactual attention to efficiently perceive sample-wise properties. With matrix decomposition and a diversity constraint, DCDC keeps the model both efficient and effective. Extensive experiments indicate that the proposed GaitGCI: 1) effectively focuses on the discriminative and interpretable regions that reflect the gait pattern; 2) is model-agnostic and can be plugged into existing models to improve performance at nearly no extra cost; 3) efficiently achieves state-of-the-art performance in both in-the-lab and in-the-wild scenarios.
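The core idea of counterfactual intervention learning can be illustrated with a minimal sketch: predictions made with the model's factual (sample-adaptive) attention are contrasted against predictions made with a counterfactual attention that is decoupled from the sample (here, random), and the training signal is the likelihood gap between the two. All shapes, names, and the random counterfactual below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical shapes: C feature channels over P spatial positions, K identities.
C, P, K = 8, 16, 5
feat = rng.normal(size=(C, P))       # stand-in feature map for one sample
W_cls = rng.normal(size=(K, C))      # stand-in identity classifier

# Factual attention: derived from the sample itself (a crude stand-in for
# the attention that DCDC would generate adaptively per sample).
factual_att = softmax(feat.mean(axis=0))
# Counterfactual attention: random, i.e. severed from the sample's content.
counterfactual_att = softmax(rng.normal(size=P))

def predict(att):
    pooled = feat @ att              # attention-weighted pooling over positions
    return softmax(W_cls @ pooled)   # identity probabilities

p_factual = predict(factual_att)
p_counterfactual = predict(counterfactual_att)

label = 2                            # hypothetical ground-truth identity
# CIL-style objective (sketch): maximize the log-likelihood gap between the
# factual and counterfactual predictions for the true class, so that only
# attention genuinely tied to the gait pattern is rewarded.
effect = np.log(p_factual[label]) - np.log(p_counterfactual[label])
```

In training, `-effect` would be added to the usual recognition loss; a confounder-driven attention map gains nothing over the random counterfactual, so only attention that captures the actual walking pattern improves the objective.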
URL
https://arxiv.org/abs/2306.03428