Abstract
Learning to represent videos is a very challenging task, both algorithmically and computationally. Standard video CNN architectures have been designed by directly extending architectures devised for image understanding to a third dimension (using a limited number of space-time modules such as 3D convolutions) or by introducing a handcrafted two-stream design to capture both appearance and motion in videos. We interpret a video CNN as a collection of multi-stream space-time convolutional blocks connected to each other, and propose the approach of automatically finding neural architectures with better connectivity for video understanding. This is done by evolving a population of overly-connected architectures guided by connection weight learning. Architectures combining representations that abstract different input types (i.e., RGB and optical flow) at multiple temporal resolutions are searched for, allowing different types or sources of information to interact with each other. Our method, referred to as AssembleNet, outperforms prior approaches on public video datasets, in some cases by a large margin.
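The core idea of connection weight learning, combining the outputs of several parallel streams through learnable per-connection weights, can be sketched as follows. This is a minimal illustration, not the paper's implementation; `combine_streams` and the sigmoid gating are assumptions for the sake of the example.

```python
import numpy as np

def combine_streams(streams, weights):
    """Combine parallel stream outputs via per-connection weights.

    Hypothetical helper: each incoming stream (e.g., an RGB block or an
    optical-flow block at some temporal resolution) is gated by a learnable
    scalar weight passed through a sigmoid, then summed. During search,
    the magnitudes of such weights can guide which connections to keep.
    """
    # Sigmoid keeps each connection weight in (0, 1).
    w = 1.0 / (1.0 + np.exp(-np.asarray(weights, dtype=float)))
    # Weighted sum over all incoming streams (all assumed same shape).
    return sum(wi * s for wi, s in zip(w, streams))

# Example: two 2x2 feature maps gated with equal (zero-logit) weights.
rgb_feat = np.ones((2, 2))
flow_feat = np.zeros((2, 2))
fused = combine_streams([rgb_feat, flow_feat], [0.0, 0.0])
```

In a real network these weights would be trained jointly with the convolutional filters, and low-weight connections pruned or down-weighted during the evolutionary search.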
URL
https://arxiv.org/abs/1905.13209