Abstract
As we approach the era of ubiquitous computing, human motion sensing plays a crucial role in smart systems for decision making, user interaction, and personalized services. Extensive research has been conducted on human tracking, pose estimation, gesture recognition, and activity recognition, with traditional methods predominantly based on cameras. However, the intrusive nature of cameras limits their use in smart home applications. To address this, mmWave radars have gained popularity due to their privacy-friendly features. In this work, we propose \textit{milliFlow}, a novel deep learning method that estimates scene flow as complementary motion information for mmWave point clouds, serving as an intermediate level of features that directly benefits downstream human motion sensing tasks. Experimental results demonstrate the superior performance of our method, with an average 3D end-point error of 4.6 cm, significantly surpassing competing approaches. Furthermore, by incorporating scene flow information, we achieve remarkable improvements in human activity recognition, human parsing, and human body part tracking. To foster further research in this area, we provide open access to our codebase and dataset.
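For reference, the 3D end-point error (EPE) reported above is the standard scene flow metric: the mean Euclidean distance between predicted and ground-truth per-point flow vectors. A minimal NumPy sketch of the metric follows; the array shapes and meter units are assumptions for illustration, not the paper's actual evaluation code.

    import numpy as np

    def epe_3d(pred_flow: np.ndarray, gt_flow: np.ndarray) -> float:
        # pred_flow, gt_flow: (N, 3) arrays of per-point 3D flow vectors.
        # EPE-3D is the mean L2 distance between predicted and true flow.
        return float(np.linalg.norm(pred_flow - gt_flow, axis=1).mean())

    # Toy example with flow in meters: an average error of 0.046
    # would correspond to the 4.6 cm figure reported in the abstract.
    pred = np.array([[0.10, 0.00, 0.00], [0.00, 0.05, 0.00]])
    gt   = np.array([[0.12, 0.00, 0.00], [0.00, 0.00, 0.00]])
    print(f"EPE-3D: {epe_3d(pred, gt):.3f} m")  # EPE-3D: 0.035 m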
URL
https://arxiv.org/abs/2306.17010