Abstract
The shift operation is an efficient alternative to depthwise separable convolution. However, it is still bottlenecked by its implementation, namely memory movement. To push this direction forward, this paper introduces a novel basic component named the Sparse Shift Layer (SSL) for constructing efficient convolutional neural networks. In this family of architectures, the basic block is composed only of 1x1 convolutional layers, with only a few shift operations applied to the intermediate feature maps. To make this idea feasible, we introduce a shift-operation penalty during optimization and further propose a quantization-aware shift learning method that makes the learned displacements more friendly for inference. Extensive ablation studies indicate that only a few shift operations are sufficient to provide spatial information communication. Furthermore, to maximize the role of SSL, we redesign an improved network architecture to Fully Exploit the limited capacity of the neural Network (FE-Net). Equipped with SSL, this network achieves 75.0% top-1 accuracy on ImageNet with only 563M M-Adds. It surpasses counterparts built from depthwise separable convolutions, as well as networks found by NAS, in both accuracy and practical speed.
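To make the mechanism concrete, below is a minimal NumPy sketch of the core idea: each channel of a feature map is displaced by an integer offset, with vacated positions zero-padded, and a sparsity penalty on the displacements drives most of them to zero so that only a few channels incur memory movement. The function names (`sparse_shift`, `shift_penalty`) and the plain L1 penalty are illustrative assumptions, not the paper's exact formulation; in the actual method the displacements are learned as real values and quantized for inference.

```python
import numpy as np

def sparse_shift(x, dy, dx):
    """Shift each channel of a feature map by an integer displacement.

    x      : array of shape (C, H, W)
    dy, dx : per-channel integer displacements, shape (C,)
    Vacated positions are zero-padded. Channels with displacement
    (0, 0) are passed through unchanged, so memory movement stays sparse.
    (Illustrative sketch, not the paper's implementation.)
    """
    C, H, W = x.shape
    out = np.zeros_like(x)
    for c in range(C):
        sy, sx = int(dy[c]), int(dx[c])
        if sy == 0 and sx == 0:
            out[c] = x[c]  # no memory movement for unshifted channels
            continue
        # valid source rows/cols whose shifted destination stays in bounds
        ys0, ys1 = max(0, -sy), min(H, H - sy)
        xs0, xs1 = max(0, -sx), min(W, W - sx)
        out[c, ys0 + sy:ys1 + sy, xs0 + sx:xs1 + sx] = x[c, ys0:ys1, xs0:xs1]
    return out

def shift_penalty(dy, dx, lam=1e-4):
    """L1-style penalty on displacements: encourages most shifts to be zero."""
    return lam * (np.abs(dy).sum() + np.abs(dx).sum())

# Shift a single 3x3 channel down by one row:
x = np.arange(9, dtype=float).reshape(1, 3, 3)
sparse_shift(x, np.array([1]), np.array([0]))
```

In an SSL block, such a sparse shift would sit between two 1x1 convolutions, supplying the spatial information exchange that depthwise convolution would otherwise provide.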
URL
https://arxiv.org/abs/1903.05285