Abstract
This paper addresses spatio-temporal localization of human actions in video. In order to localize actions in time, we propose a recurrent localization network (RecLNet) designed to model the temporal structure of actions on the level of person tracks. Our model is trained to simultaneously recognize and localize action classes in time and is based on two-layer gated recurrent units (GRU) applied separately to two streams, i.e. appearance and optical flow streams. When used together with state-of-the-art person detection and tracking, our model is shown to substantially improve spatio-temporal action localization in videos. The gain is shown to be mainly due to improved temporal localization. We evaluate our method on two recent datasets for spatio-temporal action localization, UCF101-24 and DALY, demonstrating a significant improvement over the state of the art.
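To make the recurrent component concrete, below is a minimal pure-Python sketch of a single GRU cell run over per-frame track features, the kind of gated recurrence the abstract's two-layer, two-stream model builds on. This is an illustration only, not the authors' RecLNet: the class and function names (`GRUCell`, `run_stream`), the random initialization, and the feature dimensions are all hypothetical, and the paper's model uses learned weights, two stacked GRU layers, and separate appearance and flow streams.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class GRUCell:
    """Minimal single-layer GRU cell (Cho et al. 2014 gating convention).
    Weights are randomly initialized here purely for illustration."""

    def __init__(self, input_size, hidden_size, seed=0):
        rng = random.Random(seed)

        def mat(rows, cols):
            return [[rng.uniform(-0.1, 0.1) for _ in range(cols)]
                    for _ in range(rows)]

        # Update gate z, reset gate r, candidate state h~ (input and recurrent weights).
        self.Wz, self.Uz = mat(hidden_size, input_size), mat(hidden_size, hidden_size)
        self.Wr, self.Ur = mat(hidden_size, input_size), mat(hidden_size, hidden_size)
        self.Wh, self.Uh = mat(hidden_size, input_size), mat(hidden_size, hidden_size)
        self.hidden_size = hidden_size

    @staticmethod
    def _mv(M, v):
        # Matrix-vector product for plain nested lists.
        return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

    def step(self, x, h):
        """One recurrence step: consume frame features x, update hidden state h."""
        z = [sigmoid(a + b) for a, b in zip(self._mv(self.Wz, x), self._mv(self.Uz, h))]
        r = [sigmoid(a + b) for a, b in zip(self._mv(self.Wr, x), self._mv(self.Ur, h))]
        rh = [ri * hi for ri, hi in zip(r, h)]
        h_tilde = [math.tanh(a + b)
                   for a, b in zip(self._mv(self.Wh, x), self._mv(self.Uh, rh))]
        # Interpolate between old state and candidate, gated by z.
        return [(1 - zi) * hi + zi * hti for zi, hi, hti in zip(z, h, h_tilde)]

def run_stream(cell, features):
    """Run the GRU over a track's per-frame features; return per-frame hidden states,
    which a classifier head could score frame-by-frame for temporal localization."""
    h = [0.0] * cell.hidden_size
    states = []
    for x in features:
        h = cell.step(x, h)
        states.append(h)
    return states

# Hypothetical usage: a 6-frame track with 3-dimensional per-frame features.
cell = GRUCell(input_size=3, hidden_size=4)
track_features = [[0.5, -0.2, 0.1]] * 6
states = run_stream(cell, track_features)
```

In the paper's setup, one such recurrent stack would run on appearance features and another on optical flow features of the same person track, with the two streams' per-frame class scores fused to decide which frames of the track belong to the action.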
URL
https://arxiv.org/abs/1806.11008