Abstract
We introduce a class of causal video understanding models that aims to improve the efficiency of video processing by maximising throughput, minimising latency, and reducing the number of clock cycles. Leveraging operation pipelining and multi-rate clocks, these models perform a minimal amount of computation (e.g. as few as four convolutional layers) for each frame per timestep to produce an output. The models are still very deep, with dozens of such operations being performed, but in a pipelined fashion that enables depth-parallel computation. We illustrate the proposed principles by applying them to existing image architectures and analyse their behaviour on two video tasks: action recognition and human keypoint localisation. The results show that a significant degree of parallelism, and hence speedup, can be achieved with little loss in performance.
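To make the two mechanisms named above concrete, the sketch below (PyTorch; the class name, layer choices, and clock rates are illustrative assumptions, not the paper's actual architectures) shows operation pipelining combined with multi-rate clocks: each stage reads what its predecessor produced at the previous timestep, so all stages could execute in parallel and per-timestep work stays constant, while deeper stages may tick at a slower rate and simply hold their last activation in between.

```python
import torch
import torch.nn as nn


class PipelinedNet(nn.Module):
    """Toy depth-parallel pipeline with per-stage clock rates (illustrative only)."""

    def __init__(self, num_stages=4, channels=16, rates=None):
        super().__init__()
        self.stages = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            for _ in range(num_stages)
        )
        # Multi-rate clocks: stage i recomputes only every rates[i] timesteps
        # and otherwise holds (reuses) its previous activation.
        self.rates = rates or [1] * num_stages
        self.acts = [None] * num_stages  # one buffered activation per stage
        self.t = 0

    @torch.no_grad()
    def step(self, frame):
        # Stage i reads what stage i-1 produced at the previous timestep, so
        # all stages could run concurrently on parallel hardware; we emulate
        # that here by updating every buffer from last timestep's values at once.
        inputs = [frame] + self.acts[:-1]
        self.acts = [
            stage(x) if x is not None and self.t % rate == 0 else act
            for stage, x, act, rate in zip(self.stages, inputs, self.acts, self.rates)
        ]
        self.t += 1
        # The output lags the input while the pipeline fills; afterwards one
        # output is produced per timestep.
        return self.acts[-1]


net = PipelinedNet(rates=[1, 1, 2, 2])  # deeper stages tick at half rate
for t in range(8):
    out = net.step(torch.randn(1, 16, 32, 32))  # None until the pipeline fills
```

After a warm-up of roughly num_stages - 1 timesteps (longer when slower clocks are used), the pipeline is full and emits one output per input frame; this fixed output lag is the price paid for depth-parallelism.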
URL
https://arxiv.org/abs/1806.03863