LiGAR: LiDAR-Guided Hierarchical Transformer for Multi-Modal Group Activity Recognition

2024-10-28 15:11:49
Naga Venkata Sai Raviteja Chappa, Khoa Luu

Abstract

Group Activity Recognition (GAR) remains challenging in computer vision due to the complex nature of multi-agent interactions. This paper introduces LiGAR, a LiDAR-Guided Hierarchical Transformer for Multi-Modal Group Activity Recognition. LiGAR leverages LiDAR data as a structural backbone to guide the processing of visual and textual information, enabling robust handling of occlusions and complex spatial arrangements. Our framework incorporates a Multi-Scale LiDAR Transformer, Cross-Modal Guided Attention, and an Adaptive Fusion Module to effectively integrate multi-modal data at different semantic levels. LiGAR's hierarchical architecture captures group activities at multiple granularities, from individual actions to scene-level dynamics. Extensive experiments on the JRDB-PAR, Volleyball, and NBA datasets demonstrate LiGAR's superior performance, achieving state-of-the-art results with improvements of up to 10.6% in F1-score on JRDB-PAR and 5.9% in Mean Per-Class Accuracy on the NBA dataset. Notably, LiGAR maintains high performance even when LiDAR data is unavailable during inference, showcasing its adaptability. Our ablation studies highlight the significant contribution of each component and the effectiveness of our multi-modal, multi-scale approach in advancing group activity recognition.
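The abstract only names the Cross-Modal Guided Attention and Adaptive Fusion components without detailing them. As a rough illustration of the idea, here is a minimal PyTorch sketch in which LiDAR tokens act as queries attending over visual and textual features, and a learned gate adaptively fuses the two guided streams. All module names, tensor shapes, and the gating scheme are assumptions made for this sketch, not the authors' implementation.

```python
# Minimal sketch: LiDAR-guided cross-modal attention with adaptive gated fusion.
# Shapes, hyperparameters, and the gating design are illustrative assumptions.
import torch
import torch.nn as nn

class CrossModalGuidedAttention(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        # LiDAR queries attend over each auxiliary modality separately.
        self.attn_visual = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_text = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Adaptive fusion: predict per-token weights for the two guided streams.
        self.gate = nn.Sequential(nn.Linear(2 * dim, 2), nn.Softmax(dim=-1))

    def forward(self, lidar, visual, text):
        # lidar: (B, N, D) point/voxel tokens; visual: (B, V, D); text: (B, T, D)
        v_out, _ = self.attn_visual(lidar, visual, visual)  # LiDAR-guided visual features
        t_out, _ = self.attn_text(lidar, text, text)        # LiDAR-guided textual features
        w = self.gate(torch.cat([v_out, t_out], dim=-1))    # (B, N, 2) fusion weights
        return w[..., :1] * v_out + w[..., 1:] * t_out      # adaptively fused tokens

if __name__ == "__main__":
    B, N, V, T, D = 2, 128, 196, 32, 256
    cmga = CrossModalGuidedAttention(dim=D)
    fused = cmga(torch.randn(B, N, D), torch.randn(B, V, D), torch.randn(B, T, D))
    print(fused.shape)  # torch.Size([2, 128, 256])
```

One plausible reading of the paper's robustness to missing LiDAR at inference is that the fused representation degrades gracefully when the guiding queries are replaced by learned placeholders; the gate above could then shift weight toward the remaining modalities.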

URL

https://arxiv.org/abs/2410.21108

PDF

https://arxiv.org/pdf/2410.21108.pdf

