Abstract
Group Activity Recognition (GAR) remains challenging in computer vision due to the complex nature of multi-agent interactions. This paper introduces LiGAR, a LIDAR-Guided Hierarchical Transformer for Multi-Modal Group Activity Recognition. LiGAR leverages LiDAR data as a structural backbone to guide the processing of visual and textual information, enabling robust handling of occlusions and complex spatial arrangements. Our framework incorporates a Multi-Scale LIDAR Transformer, Cross-Modal Guided Attention, and an Adaptive Fusion Module to integrate multi-modal data at different semantic levels effectively. LiGAR's hierarchical architecture captures group activities at various granularities, from individual actions to scene-level dynamics. Extensive experiments on the JRDB-PAR, Volleyball, and NBA datasets demonstrate LiGAR's superior performance, achieving state-of-the-art results with improvements of up to 10.6% in F1-score on JRDB-PAR and 5.9% in Mean Per Class Accuracy on the NBA dataset. Notably, LiGAR maintains high performance even when LiDAR data is unavailable during inference, showcasing its adaptability. Our ablation studies highlight the significant contributions of each component and the effectiveness of our multi-modal, multi-scale approach in advancing the field of group activity recognition.
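The abstract does not detail how the LiDAR "structural backbone" guides the visual stream. As a rough, hypothetical sketch of the cross-modal guided-attention idea only (all function names, shapes, and the single-head formulation are assumptions, not the authors' implementation): LiDAR features act as queries that attend over visual features, so the fused output inherits the LiDAR spatial structure.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def guided_attention(lidar_feats, visual_feats):
    """Hypothetical single-head cross-modal guided attention.

    lidar_feats:  (N_l, d) LiDAR token features (queries).
    visual_feats: (N_v, d) visual token features (keys and values).
    Returns (N_l, d): visual content re-organized along LiDAR structure.
    """
    d_k = lidar_feats.shape[-1]
    # Scaled dot-product scores: each LiDAR token scores every visual token.
    scores = lidar_feats @ visual_feats.T / np.sqrt(d_k)   # (N_l, N_v)
    weights = softmax(scores, axis=-1)                     # rows sum to 1
    return weights @ visual_feats                          # (N_l, d)

rng = np.random.default_rng(0)
lidar = rng.normal(size=(3, 4))   # 3 LiDAR tokens, dim 4
visual = rng.normal(size=(5, 4))  # 5 visual tokens, dim 4
fused = guided_attention(lidar, visual)  # shape (3, 4)
```

This is a minimal sketch; the paper's Multi-Scale LIDAR Transformer and Adaptive Fusion Module presumably apply this kind of guidance at several semantic levels and learn how to weight the modalities.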
URL
https://arxiv.org/abs/2410.21108