Paper Reading AI Learner

Network Pruning via Transformable Architecture Search

2019-05-23 15:22:25
Xuanyi Dong, Yi Yang

Abstract

Network pruning reduces the computation cost of an over-parameterized network without damaging performance. Prevailing pruning algorithms pre-define the width and depth of the pruned network and then transfer parameters from the unpruned network to it. To break this structural limitation of pruned networks, we propose applying neural architecture search to directly search for a network with flexible channel and layer sizes. The number of channels/layers is learned by minimizing the loss of the pruned networks. The feature map of the pruned network is an aggregation of K feature-map fragments (generated by K networks of different sizes), which are sampled based on a probability distribution. The loss can be back-propagated not only to the network weights but also to the parameterized distribution, explicitly tuning the size of the channels/layers. Specifically, we apply channel-wise interpolation to keep feature maps with different channel sizes aligned during aggregation. The maximum-probability size in each distribution serves as the width and depth of the pruned network, whose parameters are learned by knowledge transfer, e.g., knowledge distillation, from the original networks. Experiments on CIFAR-10, CIFAR-100, and ImageNet demonstrate the effectiveness of our new perspective on network pruning compared to traditional pruning algorithms. Various search and knowledge-transfer approaches are evaluated to show the effectiveness of the two components.
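The abstract describes aggregating K feature-map fragments of different channel counts by aligning them with channel-wise interpolation before mixing them with sampled probabilities. As a rough sketch of that idea (not the authors' implementation; it assumes linear interpolation along the channel axis and NumPy arrays of shape (C, H, W)):

```python
import numpy as np

def channel_interpolate(feat, out_channels):
    """Linearly interpolate a (C, H, W) feature map to out_channels
    along the channel axis (illustrative stand-in for the paper's
    channel-wise interpolation)."""
    c = feat.shape[0]
    # positions of the target channels in the source channel index space
    src_pos = np.linspace(0, c - 1, out_channels)
    lo = np.floor(src_pos).astype(int)
    hi = np.minimum(lo + 1, c - 1)
    w = (src_pos - lo)[:, None, None]
    return (1 - w) * feat[lo] + w * feat[hi]

def aggregate_fragments(fragments, probs):
    """Weighted sum of K fragments with different channel counts.

    Each fragment is aligned to the largest channel count via
    channel-wise interpolation, then mixed with its sampling
    probability (an assumed, simplified aggregation rule)."""
    target_c = max(f.shape[0] for f in fragments)
    aligned = [channel_interpolate(f, target_c) for f in fragments]
    return sum(p * a for p, a in zip(probs, aligned))
```

Because the mixing weights enter the aggregation multiplicatively, a loss computed on the aggregated feature map yields gradients with respect to them, which is how the size distribution can be tuned by back-propagation.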

Abstract (translated)

Network pruning reduces the computation cost of an over-parameterized network without harming performance. Common pruning algorithms pre-define the width and depth of the pruned network and then transfer parameters from the unpruned network to the pruned one. To break through the structural limitation of pruned networks, we propose applying neural architecture search to directly search for a network with flexible channel and layer sizes. The number of channels/layers is learned by minimizing the loss of the pruned network. The feature map of the pruned network is an aggregation of K feature-map fragments (generated by K networks of different sizes), which are sampled according to a probability distribution. The loss can be back-propagated not only to the network weights but also to the parameterized distribution, to explicitly adjust the size of the channels/layers. Specifically, we apply channel-wise interpolation to keep feature maps with different channel sizes aligned during aggregation. The maximum-probability size in each distribution serves as the width and depth of the pruned network, whose parameters are obtained by knowledge transfer (e.g., knowledge distillation) from the original network. Experiments on CIFAR-10, CIFAR-100, and ImageNet demonstrate the effectiveness of our new perspective on network pruning compared with traditional pruning algorithms. Multiple search and knowledge-transfer methods are evaluated to verify the effectiveness of the two components.
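The abstract states that the searched network's parameters are learned by knowledge transfer, e.g., knowledge distillation, from the original network. A minimal sketch of a standard temperature-scaled distillation loss (a generic formulation, not necessarily the exact objective used in the paper; the temperature `T` is an assumed hyper-parameter):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Mean KL divergence between temperature-softened teacher and
    student output distributions, scaled by T^2 (Hinton-style KD;
    the paper may add a cross-entropy term on ground-truth labels)."""
    p = softmax(teacher_logits / T)  # soft teacher targets
    q = softmax(student_logits / T)  # soft student predictions
    kl = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    return float(T * T * kl.mean())
```

When the student (pruned network) matches the teacher exactly, the loss is zero; otherwise the gradient pulls the student's softened outputs toward the teacher's.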

URL

https://arxiv.org/abs/1905.09717

PDF

https://arxiv.org/pdf/1905.09717.pdf

