
Training Quantized Network with Auxiliary Gradient Module

2019-03-27 03:42:31
Bohan Zhuang, Lingqiao Liu, Mingkui Tan, Chunhua Shen, Ian Reid

Abstract

In this paper, we seek to tackle two challenges in training low-precision networks: 1) the notorious difficulty of propagating gradients through a low-precision network due to the non-differentiable quantization function; 2) the requirement of a full-precision realization of skip connections in residual-type network architectures. During training, we introduce an auxiliary gradient module that mimics the effect of skip connections to assist optimization. We then expand the original low-precision network with the full-precision auxiliary gradient module to form a mixed-precision residual network, and optimize it jointly with the low-precision model using weight sharing and separate batch normalization. This strategy ensures that gradients back-propagate more easily, alleviating a major difficulty in training low-precision networks. Moreover, we find that when training a low-precision plain network with our method, the plain network can achieve performance similar to its counterpart with residual skip connections; i.e., the plain network without floating-point skip connections is just as effective to deploy at inference time. To further promote gradient flow during backpropagation, we employ a stochastic structured-precision strategy that stochastically samples and quantizes sub-networks while keeping the other parts full-precision. We evaluate the proposed method on image classification across various quantization approaches and show consistent performance improvements.
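The two difficulties the abstract names can be made concrete with a minimal pure-Python sketch. The first function shows why a uniform quantizer is non-differentiable; the second illustrates the idea behind sampling which sub-networks to quantize at each step. All function and parameter names here are hypothetical illustrations, not the paper's actual implementation.

```python
import random

def quantize_uniform(x, bits=2):
    """Uniformly quantize x in [0, 1] onto 2**bits evenly spaced levels.

    round() makes this function piecewise constant, so its true gradient
    is zero almost everywhere -- the non-differentiability the paper's
    auxiliary gradient module is meant to work around.  In practice the
    backward pass would use a straight-through estimator (treating the
    rounding as the identity when propagating gradients).
    """
    levels = (1 << bits) - 1
    x = min(max(x, 0.0), 1.0)           # clip to the quantization range
    return round(x * levels) / levels   # the non-differentiable step

def sample_precision_mask(num_blocks, p_quant=0.5, rng=None):
    """Illustrative take on 'stochastic structured precision': each block
    is quantized with probability p_quant and kept full-precision
    otherwise, so some full-precision paths remain for gradient flow.
    """
    rng = rng or random.Random(0)
    return [rng.random() < p_quant for _ in range(num_blocks)]
```

For example, with `bits=2` the quantizer maps onto the four levels {0, 1/3, 2/3, 1}, so `quantize_uniform(0.34, bits=2)` lands on 1/3; the mask function simply draws an independent Bernoulli per block each training step.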


URL

https://arxiv.org/abs/1903.11236

PDF

https://arxiv.org/pdf/1903.11236.pdf

