Abstract
Human actions often involve complex interactions across several inter-related objects in the scene. However, existing approaches to fine-grained video understanding or visual relationship detection often rely on single-object representations or pairwise object relationships. Furthermore, learning interactions across multiple objects over hundreds of video frames is computationally infeasible, and performance may suffer because a large combinatorial space must be modeled. In this paper, we propose to efficiently learn higher-order interactions between arbitrary subgroups of objects for fine-grained video understanding. We demonstrate that modeling object interactions significantly improves accuracy for both action recognition and video captioning, while reducing computation more than threefold compared with traditional pairwise relationship modeling. The proposed method is validated on two large-scale datasets: Kinetics and ActivityNet Captions. Our SINet and SINet-Caption achieve state-of-the-art performance on both datasets, even though the videos are sampled at a maximum of 1 FPS. To the best of our knowledge, this is the first work to model object interactions on open-domain large-scale video datasets; moreover, modeling higher-order object interactions improves performance at low computational cost.
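The abstract's core idea — attending over learned subgroups of objects instead of enumerating every object pair — can be sketched as follows. This is our own minimal illustration, not the paper's SINet architecture: the function name, weight shapes, and fusion step are all hypothetical, and real implementations would use learned parameters in a deep-learning framework rather than random numpy matrices.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def higher_order_interaction(obj_feats, W_att, W_proj, n_groups=3):
    """Attend over objects to form `n_groups` subgroup representations,
    then fuse them into one interaction feature (hypothetical sketch).

    obj_feats: (K, d) features for K detected objects in a frame.
    W_att:     (d, n_groups) attention weights (one scoring head per group).
    W_proj:    (n_groups * d, d_out) fusion weights.
    """
    scores = obj_feats @ W_att          # (K, n_groups) per-group attention logits
    attn = softmax(scores, axis=0)      # each column sums to 1 over the K objects
    groups = attn.T @ obj_feats         # (n_groups, d) attention-weighted subgroups
    return np.tanh(groups.reshape(-1) @ W_proj)  # (d_out,) fused interaction feature

rng = np.random.default_rng(0)
K, d, G, d_out = 5, 8, 3, 16
feats = rng.standard_normal((K, d))
out = higher_order_interaction(feats,
                               rng.standard_normal((d, G)),
                               rng.standard_normal((G * d, d_out)),
                               n_groups=G)
```

Note the cost intuition behind the abstract's efficiency claim: the attention step is linear in the number of objects K, whereas exhaustive pairwise relationship modeling grows as O(K^2).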
URL
https://arxiv.org/abs/1711.06330