FineParser: A Fine-grained Spatio-temporal Action Parser for Human-centric Action Quality Assessment

2024-05-11 02:57:16
Jinglin Xu, Sibo Yin, Guohao Zhao, Zishuo Wang, Yuxin Peng

Abstract

Existing action quality assessment (AQA) methods mainly learn deep representations at the video level for scoring diverse actions. Lacking a fine-grained understanding of the actions in videos, they suffer from low credibility and interpretability, making them insufficient for stringent applications such as Olympic diving events. We argue that a fine-grained understanding of actions requires the model to perceive and parse actions in both time and space, which is also key to the credibility and interpretability of AQA techniques. Based on this insight, we propose a new fine-grained spatio-temporal action parser named FineParser. It learns human-centric foreground action representations by focusing on target action regions within each frame and exploiting their fine-grained alignments in time and space, minimizing the impact of invalid backgrounds during assessment. In addition, we construct fine-grained annotations of human-centric foreground action masks for the FineDiving dataset, called FineDiving-HM. With refined annotations on diverse target action procedures, FineDiving-HM can promote the development of real-world AQA systems. Through extensive experiments, we demonstrate the effectiveness of FineParser, which outperforms state-of-the-art methods while supporting more fine-grained action understanding tasks. Data and code are available at this https URL.
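
To make the core idea concrete, below is a minimal PyTorch sketch of mask-guided foreground scoring: video features are weighted by a predicted human-centric foreground mask so that background responses are suppressed before pooling and quality regression. The module name MaskedForegroundScorer, the tiny stand-in encoder, and both heads are illustrative assumptions for this sketch, not FineParser's actual architecture.

```python
# A minimal sketch of mask-guided foreground scoring, assuming a stand-in
# 3D-conv encoder; this is NOT the authors' FineParser architecture.
import torch
import torch.nn as nn

class MaskedForegroundScorer(nn.Module):
    """Hypothetical module: predict a human-centric foreground mask,
    suppress background features with it, then regress a quality score."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        # Stand-in video encoder; the real method would use a stronger backbone.
        self.backbone = nn.Conv3d(3, feat_dim, kernel_size=3, padding=1)
        # Per-pixel foreground logits (supervisable with FineDiving-HM masks).
        self.mask_head = nn.Conv3d(feat_dim, 1, kernel_size=1)
        # Quality-score regressor on the pooled foreground feature.
        self.score_head = nn.Linear(feat_dim, 1)

    def forward(self, video: torch.Tensor):
        # video: (B, 3, T, H, W)
        feats = self.backbone(video)                 # (B, C, T, H, W)
        mask = torch.sigmoid(self.mask_head(feats))  # (B, 1, T, H, W)
        fg = feats * mask                            # background suppressed
        # Foreground-normalized spatial pooling, then temporal averaging.
        pooled = fg.sum(dim=(3, 4)) / mask.sum(dim=(3, 4)).clamp(min=1e-6)
        clip_feat = pooled.mean(dim=2)               # (B, C)
        return self.score_head(clip_feat), mask

model = MaskedForegroundScorer()
score, mask = model(torch.randn(2, 3, 16, 64, 64))   # toy 16-frame clip
```

In the full method, the predicted masks would be supervised by the FineDiving-HM annotations, and the foreground representations of a query video are further aligned in time and space against an exemplar before scoring; this sketch omits that alignment for brevity.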

URL

https://arxiv.org/abs/2405.06887

PDF

https://arxiv.org/pdf/2405.06887.pdf