Paper Reading AI Learner

ADM-Loc: Actionness Distribution Modeling for Point-supervised Temporal Action Localization

2023-11-27 15:24:54
Elahe Vahdani, Yingli Tian

Abstract

This paper addresses the challenge of point-supervised temporal action detection, in which only one frame per action instance is annotated in the training set. Self-training aims to provide supplementary supervision for the training process by generating pseudo-labels (action proposals) from a base model. However, most current methods generate action proposals by applying manually designed thresholds to action classification probabilities and treating adjacent snippets as independent entities. As a result, these methods struggle to generate complete action proposals, exhibit sensitivity to fluctuations in action classification scores, and generate redundant and overlapping action proposals. This paper proposes a novel framework termed ADM-Loc, which stands for Actionness Distribution Modeling for point-supervised action Localization. ADM-Loc generates action proposals by fitting a composite distribution, comprising both Gaussian and uniform distributions, to the action classification signals. This fitting process is tailored to each action class present in the video and is applied separately for each action instance, ensuring the distinctiveness of their distributions. ADM-Loc significantly enhances the alignment between the generated action proposals and ground-truth action instances and offers high-quality pseudo-labels for self-training. Moreover, to model action boundary snippets, it enforces consistency in action classification scores during training by employing Gaussian kernels, supervised with the proposed loss functions. ADM-Loc outperforms the state-of-the-art point-supervised methods on THUMOS14 and ActivityNet-v1.2 datasets.
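The abstract only sketches the proposal-generation idea, so the snippet below gives a minimal, illustrative sketch (not the authors' implementation) of the core notion: model the per-snippet classification scores of one action instance as a Gaussian bump over a uniform background, then read proposal boundaries from the fitted parameters. The function names (gaussian_plus_uniform, fit_proposal), the use of scipy.optimize.curve_fit, and the width parameter are assumptions for illustration; the paper's per-class, per-instance fitting and Gaussian-kernel supervision are more involved.

```python
# Illustrative sketch only: fit a Gaussian-plus-uniform model to a 1-D
# classification-score signal and derive an action proposal from the fit.
import numpy as np
from scipy.optimize import curve_fit

def gaussian_plus_uniform(t, amp, mu, sigma, base):
    """Composite model: uniform baseline `base` plus a Gaussian bump."""
    return base + amp * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

def fit_proposal(scores, point_idx, width=2.0):
    """Fit the composite model to per-snippet scores and return (start, end).

    scores    : 1-D array of classification scores for one action class
    point_idx : index of the annotated point (initializes the Gaussian mean)
    width     : proposal half-width in units of the fitted std (assumption)
    """
    t = np.arange(len(scores), dtype=float)
    # Initial guess: bump centered at the annotated point, ~10-snippet spread.
    p0 = [scores.max() - scores.min(), float(point_idx), 10.0, scores.min()]
    (amp, mu, sigma, base), _ = curve_fit(
        gaussian_plus_uniform, t, scores, p0=p0, maxfev=5000
    )
    sigma = abs(sigma)
    start = max(0, int(round(mu - width * sigma)))
    end = min(len(scores) - 1, int(round(mu + width * sigma)))
    return start, end

if __name__ == "__main__":
    # Synthetic example: an action roughly spanning snippets 40-60.
    rng = np.random.default_rng(0)
    t = np.arange(100, dtype=float)
    scores = 0.1 + 0.8 * np.exp(-0.5 * ((t - 50) / 6.0) ** 2)
    scores += 0.05 * rng.standard_normal(100)
    print(fit_proposal(scores, point_idx=48))
```

Because boundaries come from fitted distribution parameters rather than a hard threshold on the raw scores, this kind of fit is less sensitive to local fluctuations in the classification signal, which is the motivation the abstract gives for the approach.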

Abstract (translated)

This paper addresses the challenge of point-supervised temporal action detection, in which only one frame per action instance is annotated in the training set. Self-training aims to provide supplementary supervision for the training process by generating pseudo-labels (action proposals) from a base model. However, most existing methods generate action proposals by applying manually designed thresholds to action classification probabilities and treating adjacent snippets as independent entities. Consequently, these methods struggle to generate complete action proposals, are sensitive to fluctuations in action classification scores, and produce redundant, overlapping proposals. This paper proposes a novel framework, ADM-Loc (Actionness Distribution Modeling for point-supervised action Localization), which generates action proposals by fitting a composite of Gaussian and uniform distributions to the action classification signals. The fitting process is tailored to each action class present in the video and applied separately to each action instance, ensuring the distinctiveness of their distributions. ADM-Loc significantly improves the alignment between the generated action proposals and ground-truth action instances and provides high-quality pseudo-labels for self-training. Furthermore, to model action boundary snippets, it enforces consistency in action classification scores during training via Gaussian kernels supervised with the proposed loss functions. ADM-Loc outperforms state-of-the-art point-supervised methods on the THUMOS14 and ActivityNet-v1.2 datasets.

URL

https://arxiv.org/abs/2311.15916

PDF

https://arxiv.org/pdf/2311.15916.pdf

