Abstract
We propose a method for human action recognition that can localize the spatiotemporal regions that "define" the actions. This is a challenging task due to the subtlety of human actions in video and the co-occurrence of contextual elements. To address this challenge, we utilize conjugate samples of human actions, which are video clips that are contextually similar to human action samples but do not contain the action. We introduce a novel attentional mechanism that can spatially and temporally separate human actions from the co-occurring contextual factors. The separation of the action and context factors is weakly supervised, eliminating the need for laboriously detailed annotation of these two factors in training samples. Our method can be used to build human action classifiers with higher accuracy and better interpretability. Experiments on several human action recognition datasets demonstrate the quantitative and qualitative benefits of our approach.
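The core idea of the attentional separation can be illustrated with a minimal sketch: a soft spatiotemporal mask weights each location of a video feature map as "action," and its complement as "context," yielding two pooled descriptors. All names (`separate_action_context`, `w_att`) and the specific mask parameterization here are hypothetical simplifications for illustration; the paper's actual mechanism and training with conjugate samples differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def separate_action_context(features, w_att):
    """Split a spatiotemporal feature map into action and context descriptors.

    features: (T, H, W, C) feature map from a video backbone (hypothetical).
    w_att:    (C,) learned attention projection (hypothetical).
    """
    scores = features @ w_att                 # (T, H, W) attention logits
    a = sigmoid(scores)[..., None]            # soft action mask in [0, 1]
    # Mask-weighted average pooling over all spatiotemporal locations.
    action_feat = (a * features).sum(axis=(0, 1, 2)) / (a.sum() + 1e-8)
    context_feat = ((1 - a) * features).sum(axis=(0, 1, 2)) / ((1 - a).sum() + 1e-8)
    return action_feat, context_feat, a[..., 0]
```

In a weakly supervised setup, such a mask would be trained only from clip-level labels, e.g. by encouraging the context descriptor of an action clip to match that of its conjugate (action-free) clip while only the action descriptor drives classification.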
URL
https://arxiv.org/abs/1904.05410