Abstract
Recent works show that reducing the number of layers in a convolutional neural network can enhance efficiency while maintaining the performance of the network. Existing depth compression methods remove redundant non-linear activation functions and merge consecutive convolution layers into a single layer. However, these methods suffer from a critical drawback: the kernel size of the merged layers becomes larger, significantly undermining the latency reduction gained from reducing the depth of the network. We show that this problem can be addressed by jointly pruning convolution layers and activation functions. To this end, we propose LayerMerge, a novel depth compression method that selects which activation layers and convolution layers to remove in order to achieve a desired inference speed-up while minimizing performance loss. Since the corresponding selection problem involves an exponential search space, we formulate a novel surrogate optimization problem and solve it efficiently via dynamic programming. Empirical results demonstrate that our method consistently outperforms existing depth compression and layer pruning methods on various network architectures, on both image classification and generation tasks. We release the code at this https URL.
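As a rough intuition for the dynamic-programming step mentioned above, the sketch below solves a simplified knapsack-style selection rather than the paper's actual surrogate formulation: given assumed per-layer latency savings and loss proxies (the function name `select_layers`, the placeholder numbers, and the objective are all hypothetical illustrations, not measured profiles or LayerMerge's objective), it chooses which layers to remove so that a target latency saving is met at minimal total loss.

```python
# Hypothetical sketch: a knapsack-style dynamic program over which layers to
# remove, under an assumed latency-saving budget and per-layer loss proxies.
# Values and objective are placeholders, not the paper's formulation.

def select_layers(latencies, losses, budget, step=0.1):
    """Pick layers to remove so total latency saved >= budget with minimal loss.

    latencies[i] : latency saved by removing layer i (assumed known)
    losses[i]    : performance-loss proxy for removing layer i (assumed known)
    budget       : total latency saving to reach; `step` discretizes latencies
    """
    n = len(latencies)
    units = [round(t / step) for t in latencies]
    target = round(budget / step)

    INF = float("inf")
    # dp[b] = minimal total loss when at least b latency units are saved
    dp = [0.0] + [INF] * target
    choice = [[False] * (target + 1) for _ in range(n)]

    for i in range(n):
        new_dp = dp[:]
        for b in range(target, -1, -1):
            prev = max(b - units[i], 0)  # savings beyond the target saturate
            if dp[prev] + losses[i] < new_dp[b]:
                new_dp[b] = dp[prev] + losses[i]
                choice[i][b] = True
        dp = new_dp

    # Backtrack which layers were removed in the optimal solution
    removed, b = [], target
    for i in range(n - 1, -1, -1):
        if choice[i][b]:
            removed.append(i)
            b = max(b - units[i], 0)
    return sorted(removed), dp[target]


if __name__ == "__main__":
    # Toy example with made-up per-layer latency savings (ms) and loss proxies.
    latencies = [1.2, 0.8, 2.0, 1.5, 0.6]
    losses = [0.30, 0.05, 0.40, 0.10, 0.02]
    removed, total_loss = select_layers(latencies, losses, budget=2.5)
    print("remove layers:", removed, "total loss proxy:", round(total_loss, 3))
```

The point of the sketch is only that a per-layer selection problem with an additive budget and an additive cost admits a polynomial-time table instead of an exponential enumeration; the paper's surrogate objective and how it accounts for merged kernel sizes are defined in the full text.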
URL
https://arxiv.org/abs/2406.12837