Abstract
Convolutional Neural Networks (CNNs) have a large number of parameters and require significant hardware resources to compute, so edge devices struggle to run large networks. This paper proposes a novel method to reduce the parameters and FLOPs of deep learning models for computational efficiency. We introduce accuracy and efficiency coefficients to control the trade-off between the accuracy of the network and its computing efficiency. The proposed rewarded meta-pruning algorithm trains a network to generate weights for a pruned model, where candidates are chosen based on the approximate parameters of the final model and their interactions are controlled by a reward function. The reward function allows more control over the metrics of the final pruned model. Extensive experiments demonstrate superior performance of the proposed method over state-of-the-art methods in pruning the ResNet-50, MobileNetV1, and MobileNetV2 networks.
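As a rough illustration only (the abstract does not give the exact formulation), a reward of this kind might weight an accuracy term against an efficiency term through the two coefficients. The sketch below is a minimal assumption-laden example in Python; the function name, the linear weighted form, and the FLOPs normalization are all hypothetical, not the paper's method.

    # Hypothetical sketch of a reward balancing accuracy against efficiency.
    # The weighted-sum form and the names alpha (accuracy coefficient) and
    # beta (efficiency coefficient) are assumptions, not the paper's exact
    # reward function.

    def reward(accuracy, flops, target_flops, alpha=1.0, beta=0.5):
        """Score a pruned candidate: higher accuracy raises the reward,
        while exceeding the FLOPs budget lowers it."""
        efficiency = 1.0 - flops / target_flops  # positive when under budget
        return alpha * accuracy + beta * efficiency

    # Example: a candidate at 76.1% top-1 accuracy using 2.0 GFLOPs
    # against a 3.0 GFLOPs budget.
    print(reward(accuracy=0.761, flops=2.0e9, target_flops=3.0e9))

Raising alpha relative to beta would push the search toward more accurate but costlier subnetworks, and vice versa, which matches the trade-off the coefficients are described as controlling.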
URL
https://arxiv.org/abs/2301.11063