
An Improved Graph Pooling Network for Skeleton-Based Action Recognition

2024-04-25 06:41:58
Cong Wu, Xiao-Jun Wu, Tianyang Xu, Josef Kittler

Abstract

Pooling is a crucial operation in computer vision, yet the unique structure of skeletons hinders the application of existing pooling strategies to skeleton graph modelling. In this paper, we propose an Improved Graph Pooling Network, referred to as IGPN. The main innovations include a region-aware pooling strategy based on structural partitioning: the correlation matrix of the original features is used to adaptively adjust the weights of information in different regions of the newly generated features, resulting in more flexible and effective processing. To prevent the irreversible loss of discriminative information, we propose a cross-fusion module and an information supplement module that provide block-level and input-level information, respectively. As a plug-and-play structure, the proposed operation can be seamlessly combined with existing GCN-based models. We conducted extensive evaluations on several challenging benchmarks, and the experimental results demonstrate the effectiveness of the proposed solutions. For example, on the cross-subject evaluation of the NTU-RGB+D 60 dataset, IGPN achieves a significant improvement in accuracy over the baseline while reducing FLOPs by nearly 70%; a heavier version is also introduced to further boost accuracy.
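
The abstract only sketches the mechanism, but its core idea, pooling joints into body-part regions and reweighting each region with a correlation matrix computed from the original features, can be illustrated with a minimal PyTorch sketch. The body-part partition, the scaled-softmax correlation, and all class and variable names below are illustrative assumptions based on the abstract, not the authors' implementation.

```python
# Minimal sketch of region-aware skeleton graph pooling, based only on the
# abstract. The partition of NTU-RGB+D joints into body parts and the way the
# correlation matrix is turned into region weights are assumptions.
import torch
import torch.nn as nn


class RegionAwarePooling(nn.Module):
    """Pools joint features into body-part regions and reweights each region
    using a correlation matrix computed from the original features."""

    def __init__(self, regions):
        super().__init__()
        # regions: list of joint-index lists (e.g. torso, arms, legs)
        self.regions = [torch.tensor(r) for r in regions]

    def forward(self, x):
        # x: (N, C, T, V) - batch, channels, frames, joints
        n, c, t, v = x.shape
        flat = x.permute(0, 3, 1, 2).reshape(n, v, c * t)                # (N, V, C*T)
        # joint-to-joint correlation of the original features
        corr = torch.softmax(flat @ flat.transpose(1, 2) / (c * t) ** 0.5, dim=-1)  # (N, V, V)

        pooled = []
        for idx in self.regions:
            idx = idx.to(x.device)
            # average correlation mass attached to this region -> per-sample weight
            w = corr[:, :, idx].mean(dim=(1, 2)).view(n, 1, 1, 1)        # (N, 1, 1, 1)
            region_feat = x[:, :, :, idx].mean(dim=-1, keepdim=True)     # (N, C, T, 1)
            pooled.append(w * region_feat)
        return torch.cat(pooled, dim=-1)                                  # (N, C, T, num_regions)


if __name__ == "__main__":
    # NTU-RGB+D style input: 25 joints grouped into 5 coarse body parts (illustrative split)
    parts = [[0, 1, 2, 3, 20], [4, 5, 6, 7, 21, 22], [8, 9, 10, 11, 23, 24],
             [12, 13, 14, 15], [16, 17, 18, 19]]
    pool = RegionAwarePooling(parts)
    out = pool(torch.randn(2, 64, 32, 25))
    print(out.shape)  # torch.Size([2, 64, 32, 5])
```

In the paper's pipeline such a pooling layer would presumably sit between GCN blocks as a plug-and-play operation, with the cross-fusion and information supplement modules reinjecting block-level and input-level features to compensate for information lost during pooling.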

URL

https://arxiv.org/abs/2404.16359

PDF

https://arxiv.org/pdf/2404.16359.pdf

