Paper Reading AI Learner

Learning Low Precision Deep Neural Networks through Regularization

2018-09-01 01:28:21
Yoojin Choi, Mostafa El-Khamy, Jungwon Lee

Abstract

We consider the quantization of deep neural networks (DNNs) to produce low-precision models for efficient inference with fixed-point operations. Compared to previous approaches that train quantized DNNs directly under the constraints of low-precision weights and activations, we learn the quantization of DNNs with minimal quantization loss through regularization. In particular, we introduce a learnable regularization coefficient to find accurate low-precision models efficiently during training. In our experiments, the proposed scheme yields state-of-the-art low-precision models of AlexNet and ResNet-18, with better accuracy than previously available low-precision models. We also apply our quantization method to produce low-precision DNNs for image super-resolution, observing only $0.5$~dB peak signal-to-noise ratio (PSNR) loss when using binary weights and 8-bit activations. The proposed scheme can be used to train low-precision models from scratch or to fine-tune a well-trained high-precision model so that it converges to a low-precision model. Finally, we discuss how a similar regularization method can be adopted for DNN weight pruning and compression, and show that $401\times$ compression is achieved for LeNet-5.
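The core idea in the abstract — penalizing the distance between weights and their quantized values, with a regularization coefficient that is itself learned — can be sketched as follows. This is a minimal NumPy illustration under my own assumptions, not the authors' implementation: the quantizer, the log-parameterization of the coefficient, and the extra term encouraging the coefficient to grow are all illustrative choices.

```python
import numpy as np

def quantize(w, bits=1):
    # Project weights onto a low-precision grid.
    # For 1 bit: scaled binarization, scale * sign(w) (illustrative choice).
    if bits == 1:
        scale = np.mean(np.abs(w))
        return scale * np.sign(w)
    # Uniform symmetric quantizer for bits > 1 (also illustrative).
    levels = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / levels
    return np.round(w / scale) * scale

def regularized_loss(task_loss, weights, log_coef, bits=1):
    # Total loss = task loss + coef * sum ||W - Q(W)||^2.
    # Parameterizing the coefficient as exp(log_coef) keeps it positive;
    # the final -log_coef term (an assumption, shown schematically) rewards
    # a growing coefficient so weights are gradually pulled onto the grid.
    coef = np.exp(log_coef)
    penalty = sum(np.sum((w - quantize(w, bits)) ** 2) for w in weights)
    return task_loss + coef * penalty - log_coef
```

When the weights already lie on the quantization grid the penalty vanishes, so a fine-tuned high-precision model can drift toward a low-precision one as the learned coefficient increases.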

URL

https://arxiv.org/abs/1809.00095

PDF

https://arxiv.org/pdf/1809.00095.pdf
