Abstract
Enabling computational systems to localize actions in video content has manifold applications. Traditionally, this problem is approached in a fully supervised setting, where video clips with complete frame-by-frame annotations around the actions of interest are provided for training. However, the data requirements needed to achieve adequate generalization in this setting are prohibitive. In this work, we circumvent this issue by casting the problem in a weakly supervised setting, i.e., by considering videos as labelled `sets' of unlabelled video segments. Firstly, we apply unsupervised segmentation to exploit the elementary structure of each video. Subsequently, a convolutional neural network is used to extract RGB features from the resulting video segments. Finally, Multiple Instance Learning (MIL) is employed to predict labels at the video-segment level, thus inherently performing spatio-temporal action detection. In contrast to previous work, we use a different MIL formulation in which the label of each video segment is continuous rather than discrete, making the resulting optimization function tractable. Additionally, we utilize a set-splitting technique for regularization. Experimental results on multiple performance indicators on the UCF-Sports dataset support the effectiveness of our approach.
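The weakly supervised pipeline described above treats each video as a bag of unlabelled segments with only a video-level label. The following is a minimal, hypothetical sketch of that idea (not the paper's actual model): synthetic segment features stand in for CNN RGB features, a linear scorer assigns each segment a continuous score in (0, 1), and the video-level prediction is obtained by max aggregation over segments, so the per-segment scores implicitly localize the action.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy bags: each "video" is a set of segment feature vectors (synthetic stand-ins
# for CNN features). A positive bag contains at least one "action" segment,
# modelled here as a shifted Gaussian; all other segments are background noise.
def make_bag(positive, n_segments=8, dim=16):
    X = rng.normal(0.0, 1.0, (n_segments, dim))
    if positive:
        X[rng.integers(n_segments)] += 2.0  # one action segment, shifted mean
    return X

bags = [make_bag(i % 2 == 0) for i in range(40)]
labels = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(40)])

# Linear segment scorer trained with SGD on a binary cross-entropy loss.
# The bag score is the max over continuous per-segment scores, so the
# gradient flows only through the highest-scoring segment of each bag.
w, b, lr = np.zeros(16), 0.0, 0.1
for epoch in range(200):
    for X, y in zip(bags, labels):
        s = sigmoid(X @ w + b)   # continuous per-segment scores in (0, 1)
        i = int(s.argmax())
        g = s[i] - y             # d(BCE)/d(logit) at the max-scoring segment
        w -= lr * g * X[i]
        b -= lr * g

# After training, per-segment scores indicate which segments carry the action,
# i.e. the model performs localization despite seeing only bag-level labels.
preds = np.array([sigmoid(X @ w + b).max() for X in bags])
```

This uses max aggregation for brevity; smoother aggregators (e.g. log-sum-exp or top-k means) propagate gradients to more segments per update and are common alternatives in MIL.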
URL
https://arxiv.org/abs/1905.02171