Proposal-based Temporal Action Localization with Point-level Supervision

2023-10-09 08:27:05
Yuan Yin, Yifei Huang, Ryosuke Furuta, Yoichi Sato

Abstract

Point-level supervised temporal action localization (PTAL) aims to recognize and localize actions in untrimmed videos where only a single point (frame) within each action instance is annotated in the training data. Without full temporal annotations, most previous works adopt the multiple instance learning (MIL) framework, in which the input video is segmented into non-overlapping short snippets and action classification is performed independently on each snippet. We argue that the MIL framework is suboptimal for PTAL because it operates on separate short snippets that contain limited temporal information; as a result, the classifier focuses on a few easy-to-distinguish snippets instead of discovering the complete action instance without missing relevant snippets. To alleviate this problem, we propose a novel method that localizes actions by generating and evaluating action proposals of flexible duration, which capture more comprehensive temporal information. Moreover, we introduce an efficient clustering algorithm to generate dense pseudo labels that provide stronger supervision, and a fine-grained contrastive loss to further refine the quality of the pseudo labels. Experiments show that the proposed method achieves performance competitive with or superior to state-of-the-art methods, and even to some fully-supervised methods, on four benchmarks: the ActivityNet 1.3, THUMOS 14, GTEA, and BEOID datasets.
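
As a rough illustration of the argument above, the sketch below contrasts snippet-level MIL pooling with proposal-level scoring. The function names, the top-k pooling, and the outer-inner-contrast scoring rule are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch: snippet-level MIL pooling vs. proposal-level scoring.
# All names, shapes, and the scoring rule are illustrative assumptions.
import numpy as np

def mil_video_score(snippet_logits: np.ndarray, k: int = 8) -> np.ndarray:
    """Top-k mean pooling over per-snippet logits (T, C) -> video-level (C,).
    The video score is driven by a few high-scoring snippets, which is the
    behaviour the abstract argues leads to incomplete detections."""
    topk = np.sort(snippet_logits, axis=0)[-k:]
    return topk.mean(axis=0)

def proposal_score(scores: np.ndarray, start: int, end: int, margin: int = 4) -> float:
    """Score a candidate segment [start, end) for one class by the contrast
    between its interior and its immediate surroundings (an outer-inner
    contrast heuristic, used here only for illustration)."""
    inner = scores[start:end].mean()
    left = scores[max(0, start - margin):start]
    right = scores[end:end + margin]
    outer = np.concatenate([left, right])
    return inner - (outer.mean() if outer.size else 0.0)

# Toy usage: 1-D class-activation scores over 20 snippets.
scores = np.array([0.1] * 5 + [0.9] * 8 + [0.1] * 7)
print(mil_video_score(scores[:, None], k=4))   # high video-level score
print(proposal_score(scores, 5, 13))           # 0.8: matches the action extent
print(proposal_score(scores, 8, 10))           # 0.2: partial cover scores worse
```

A proposal covering the full action extent scores higher than a partial one, which is the flexible-duration evaluation the abstract motivates.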

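The dense pseudo labels could, for illustration, be grown from each annotated point by expanding while neighbouring snippet features remain similar to the labelled frame. The greedy expansion and the similarity threshold below are stand-in assumptions; the paper's actual clustering algorithm is not reproduced here.

```python
# Hedged sketch: expanding a single annotated point into a dense
# pseudo-labelled segment via feature similarity (a simple stand-in
# for the paper's clustering algorithm).
import numpy as np

def expand_point_label(features: np.ndarray, point: int, tau: float = 0.8):
    """features: (T, D) L2-normalised snippet features; point: index of the
    annotated frame. Returns (start, end) of the pseudo-labelled segment."""
    anchor = features[point]
    sims = features @ anchor               # cosine similarity to the labelled frame
    start = point
    while start > 0 and sims[start - 1] >= tau:
        start -= 1
    end = point + 1
    while end < len(features) and sims[end] >= tau:
        end += 1
    return start, end

# Toy usage: 10 snippets in 2-D; frames 3-6 share a feature direction.
feats = np.array([[1, 0]] * 3 + [[0, 1]] * 4 + [[1, 0]] * 3, dtype=float)
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
print(expand_point_label(feats, point=4))  # -> (3, 7)
```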

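For the pseudo-label refinement, a supervised InfoNCE-style loss over snippets is one plausible reading of a "fine-grained contrastive loss": it pulls together snippets sharing a pseudo class and pushes apart action and background. The PyTorch implementation, the temperature, and the pairing scheme below are all assumptions, not the paper's formulation.

```python
# Hedged sketch of a supervised contrastive loss over pseudo-labelled snippets.
import torch
import torch.nn.functional as F

def snippet_contrastive_loss(feats: torch.Tensor,
                             pseudo_labels: torch.Tensor,
                             temperature: float = 0.1) -> torch.Tensor:
    """feats: (T, D) snippet embeddings; pseudo_labels: (T,) class ids
    with 0 = background. Supervised InfoNCE over all snippet pairs."""
    feats = F.normalize(feats, dim=1)
    sim = feats @ feats.t() / temperature                 # (T, T) similarities
    n = feats.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=feats.device)
    sim = sim.masked_fill(self_mask, -1e9)                # exclude self-pairs
    pos = (pseudo_labels.unsqueeze(0) == pseudo_labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    has_pos = pos.any(dim=1)
    # Mean log-probability of the positives, for each anchor that has any.
    loss = -(log_prob * pos)[has_pos].sum(1) / pos[has_pos].sum(1)
    return loss.mean()

# Toy usage: 6 snippets, pseudo labels [1, 1, 0, 0, 2, 2].
feats = torch.randn(6, 16)
labels = torch.tensor([1, 1, 0, 0, 2, 2])
print(snippet_contrastive_loss(feats, labels))
```
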
URL

https://arxiv.org/abs/2310.05511

PDF

https://arxiv.org/pdf/2310.05511.pdf

