Abstract
Spatio-temporal representations in frame sequences play an important role in action recognition. Previously, combining optical flow as temporal information with a set of RGB images containing spatial information yielded significant performance gains on action recognition tasks. However, this approach has a high computational cost and requires a two-stream (RGB and optical flow) framework. In this paper, we propose MFNet (Motion Feature Network), which contains motion blocks that encode spatio-temporal information between adjacent frames in a unified network trainable end-to-end. The motion block can be attached to any existing CNN-based action recognition framework at only a small additional cost. We evaluated our network on two action recognition datasets (Jester and Something-Something) and achieved competitive performance on both by training the networks from scratch.
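The abstract only names the motion block; its actual design is given in the paper. As a rough illustration of the general idea it describes, namely encoding temporal information between adjacent frames cheaply instead of computing optical flow, the following sketch (not the paper's actual architecture) takes the difference between feature maps of two adjacent frames and concatenates it with the spatial features along the channel axis:

```python
import numpy as np

def motion_block(feat_t, feat_t1):
    """Hypothetical motion block: approximate temporal information via the
    element-wise difference of adjacent-frame feature maps (a cheap stand-in
    for optical flow), then stack it with the spatial features along the
    channel axis (channels-first layout)."""
    motion = feat_t1 - feat_t  # frame-to-frame feature difference
    return np.concatenate([feat_t, motion], axis=0)

# Toy example: C=4 channels of 8x8 feature maps for two adjacent frames.
rng = np.random.default_rng(0)
f_t = rng.standard_normal((4, 8, 8))
f_t1 = rng.standard_normal((4, 8, 8))
out = motion_block(f_t, f_t1)
print(out.shape)  # (8, 8, 8): 4 spatial + 4 motion channels
```

Because such a block only adds a subtraction and a concatenation per pair of adjacent frames, it can be inserted into an existing CNN backbone at small extra cost and trained end-to-end, consistent with the claim in the abstract.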
URL
https://arxiv.org/abs/1807.10037