Paper Reading AI Learner

Spatio-Temporal Action Localization in a Weakly Supervised Setting

2019-05-06 17:39:09
Kurt Degiorgio, Fabio Cuzzolin

Abstract

Enabling computational systems to localize actions in video-based content has manifold applications. Traditionally, this problem is approached in a fully supervised setting, where video clips with complete frame-by-frame annotations around the actions of interest are provided for training. However, the data requirements needed to achieve adequate generalization in this setting are prohibitive. In this work, we circumvent this issue by casting the problem in a weakly supervised setting, i.e., by considering videos as labelled `sets' of unlabelled video segments. First, we apply unsupervised segmentation to take advantage of the elementary structure of each video. Subsequently, a convolutional neural network is used to extract RGB features from the resulting video segments. Finally, Multiple Instance Learning (MIL) is employed to predict labels at the video-segment level, thus inherently performing spatio-temporal action detection. In contrast to previous work, we make use of a different MIL formulation in which the label of each video segment is continuous rather than discrete, making the resulting optimization function tractable. Additionally, we utilize a set-splitting technique for regularization. Experimental results considering multiple performance indicators on the UCF-Sports dataset support the effectiveness of our approach.
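The abstract does not spell out the exact MIL formulation, so the following is only a minimal illustrative sketch of the general idea: each video is a bag of segment features (here random stand-ins for CNN RGB features), a linear scorer assigns each segment a continuous score in (0, 1), and a smooth aggregation pools segment scores into a video-level prediction that can be supervised with only the video label. All names (`segment_scores`, `video_score`, the feature dimensions) are hypothetical, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def segment_scores(X, w, b):
    """Continuous per-segment action scores in (0, 1): linear scorer + sigmoid.

    X: (n_segments, d) segment features, w: (d,) weights, b: scalar bias.
    """
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

def video_score(seg_scores, temperature=1.0):
    """Smooth-max pooling of segment scores into one video-level score.

    A softmax over segment scores weights each segment, so the video score
    is dominated by the highest-scoring (most action-like) segments while
    staying differentiable, unlike a hard max.
    """
    z = seg_scores / temperature
    weights = np.exp(z - z.max())
    weights /= weights.sum()
    return float(np.sum(weights * seg_scores))

# Toy bag: 8 video segments with 16-dim features, video labelled positive.
X = rng.normal(size=(8, 16))
w = rng.normal(size=16) * 0.1
b = 0.0

s = segment_scores(X, w, b)   # continuous per-segment labels
y_hat = video_score(s)        # aggregated video-level prediction
```

Because the pooled score is a weighted mean of the continuous segment scores, gradients from a video-level loss reach every segment, which is what lets segment-level (spatio-temporal) localization emerge from video-level supervision alone.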


URL

https://arxiv.org/abs/1905.02171

PDF

https://arxiv.org/pdf/1905.02171.pdf

