
Towards Continual Egocentric Activity Recognition: A Multi-modal Egocentric Activity Dataset for Continual Learning

2023-01-26 04:32:00
Linfeng Xu, Qingbo Wu, Lili Pan, Fanman Meng, Hongliang Li, Chiyuan He, Hanxin Wang, Shaoxu Cheng, Yu Dai

Abstract

With the rapid development of wearable cameras, massive collections of egocentric video for first-person visual perception have become available. Using egocentric videos to predict first-person activity faces many challenges, including a limited field of view, occlusions, and unstable motion. Observing that sensor data from wearable devices facilitate human activity recognition, multi-modal activity recognition is attracting increasing attention. However, the lack of related datasets hinders the development of multi-modal deep learning for egocentric activity recognition. Meanwhile, deploying deep learning in the real world has brought a focus on continual learning, which often suffers from catastrophic forgetting. Yet the catastrophic forgetting problem in egocentric activity recognition, especially in the context of multiple modalities, remains unexplored because no suitable dataset is available. To assist this research, we present UESTC-MMEA-CL, a multi-modal egocentric activity dataset for continual learning, collected with self-developed glasses that integrate a first-person camera and wearable sensors. It contains synchronized video, accelerometer, and gyroscope data for 32 types of daily activities performed by 10 participants. We compare its class types and scale with those of other publicly available datasets, and provide a statistical analysis of the sensor data to show its auxiliary effect for different behaviors. Results of egocentric activity recognition are reported for the three modalities, RGB, acceleration, and gyroscope, used separately and jointly on a base network architecture. To explore catastrophic forgetting in continual learning tasks, four baseline methods are extensively evaluated with different multi-modal combinations. We hope that UESTC-MMEA-CL can promote future studies on continual learning for first-person activity recognition in wearable applications.
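As a concrete illustration of the multi-modal setup the abstract describes, the sketch below shows a minimal late-fusion classifier in PyTorch that combines an RGB clip feature with accelerometer and gyroscope streams for the 32 activity classes. The layer sizes, the 1D-CNN sensor encoders, and fusion by concatenation are assumptions made for illustration only; this is not the dataset's reference implementation.

```python
# Minimal sketch (not the paper's released code): a late-fusion classifier that
# combines per-modality features from RGB frames, accelerometer, and gyroscope
# streams. All layer sizes and the concatenation-based fusion are assumptions.
import torch
import torch.nn as nn

NUM_CLASSES = 32  # UESTC-MMEA-CL defines 32 daily activity classes


class SensorBranch(nn.Module):
    """Encode a (batch, time, 3) accelerometer or gyroscope stream with a 1D CNN."""

    def __init__(self, out_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(3, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, out_dim, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the temporal axis
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, 3) -> (batch, 3, time) for Conv1d
        return self.net(x.transpose(1, 2)).squeeze(-1)


class LateFusionClassifier(nn.Module):
    """Concatenate per-modality embeddings and classify with a single linear head."""

    def __init__(self, rgb_dim: int = 512, sensor_dim: int = 128):
        super().__init__()
        # A pre-extracted RGB clip feature (e.g. from a CNN backbone) is assumed
        # here; the visual backbone itself is omitted to keep the sketch short.
        self.rgb_proj = nn.Linear(rgb_dim, 256)
        self.acc_branch = SensorBranch(sensor_dim)
        self.gyro_branch = SensorBranch(sensor_dim)
        self.head = nn.Linear(256 + 2 * sensor_dim, NUM_CLASSES)

    def forward(self, rgb_feat, acc, gyro):
        fused = torch.cat(
            [self.rgb_proj(rgb_feat), self.acc_branch(acc), self.gyro_branch(gyro)],
            dim=1,
        )
        return self.head(fused)


if __name__ == "__main__":
    model = LateFusionClassifier()
    rgb_feat = torch.randn(4, 512)  # placeholder clip-level RGB features
    acc = torch.randn(4, 200, 3)    # 200 accelerometer samples, 3 axes
    gyro = torch.randn(4, 200, 3)   # 200 gyroscope samples, 3 axes
    logits = model(rgb_feat, acc, gyro)
    print(logits.shape)             # torch.Size([4, 32])
```

Dropping the sensor branches (or the RGB projection) from the fusion step gives the single-modality baselines, which is how "separately, and jointly" comparisons of the three modalities are typically run.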


URL

https://arxiv.org/abs/2301.10931

PDF

https://arxiv.org/pdf/2301.10931.pdf

