Paper Reading AI Learner

GaitSADA: Self-Aligned Domain Adaptation for mmWave Gait Recognition

2023-01-31 03:21:08
Ekkasit Pinyoanuntapong (1), Ayman Ali (1), Kalvik Jakkala (1), Pu Wang (1), Minwoo Lee (1), Qucheng Peng (2), Chen Chen (2), Zhi Sun (3) ((1) University of North Carolina at Charlotte, (2) University of Central Florida, (3) Tsinghua University)

Abstract

mmWave radar-based gait recognition is a novel user identification method that captures human gait biometrics from mmWave radar return signals. This technology offers privacy protection and is resilient to weather and lighting conditions. However, its generalization performance remains largely unexplored, which limits its practical deployment. To investigate this, a non-synthetic dataset is collected and analyzed, revealing spatial and temporal domain shifts in mmWave gait biometric data that significantly degrade identification accuracy. To mitigate these shifts, a novel self-aligned domain adaptation method called GaitSADA is proposed. GaitSADA improves system generalization through a two-stage semi-supervised training approach: the first stage uses semi-supervised contrastive learning, and the second uses semi-supervised consistency training with centroid alignment. Extensive experiments show that GaitSADA outperforms representative domain adaptation methods by an average of 15.41% in low-data regimes.
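The paper itself does not spell out the loss functions in this abstract, so the following is only an illustrative sketch of the two ideas it names: an NT-Xent-style contrastive loss for the first stage, and a per-class centroid-alignment loss between labeled source features and pseudo-labeled target features for the second. Function names, the temperature value, and the use of squared Euclidean distance are all assumptions, not GaitSADA's actual implementation.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Illustrative NT-Xent contrastive loss (stage-1 idea): z1 and z2 are
    embeddings of two augmented views of the same batch; row i of z1 and
    row i of z2 form a positive pair, all other rows are negatives."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                     # drop self-similarity
    n = z1.shape[0]
    # positive index for row i is i+n (first half) or i-n (second half)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return float(loss.mean())

def centroid_alignment_loss(src_feats, src_labels, tgt_feats, tgt_pseudo):
    """Illustrative centroid alignment (stage-2 idea): pull each class's
    centroid of labeled source features toward the centroid of
    pseudo-labeled target features for the same class."""
    dists = []
    for c in np.unique(src_labels):
        if (tgt_pseudo == c).any():
            mu_src = src_feats[src_labels == c].mean(axis=0)
            mu_tgt = tgt_feats[tgt_pseudo == c].mean(axis=0)
            dists.append(np.sum((mu_src - mu_tgt) ** 2))
    return float(np.mean(dists)) if dists else 0.0
```

In a full pipeline these terms would be combined with a supervised classification loss and minimized by gradient descent on the feature extractor; the sketch above only shows the shape of the objectives.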

URL

https://arxiv.org/abs/2301.13384

PDF

https://arxiv.org/pdf/2301.13384.pdf
