Matrix and tensor decompositions for training binary neural networks

2019-04-16 17:57:27
Adrian Bulat, Jean Kossaifi, Georgios Tzimiropoulos, Maja Pantic

Abstract

This paper is on improving the training of binary neural networks, in which both activations and weights are binary. While prior methods for neural network binarization binarize each filter independently, we propose instead to parametrize the weight tensor of each layer using a matrix or tensor decomposition. The binarization process is then performed using this latent parametrization, via a quantization function (e.g. the sign function) applied to the reconstructed weights. A key feature of our method is that while the reconstruction is binarized, the computation in the latent factorized space is done in the real domain. This has several advantages: (i) the latent factorization enforces a coupling of the filters before binarization, which significantly improves the accuracy of the trained models; (ii) while at training time the binary weights of each convolutional layer are parametrized using a real-valued matrix or tensor decomposition, during inference we simply use the reconstructed (binary) weights. As a result, our method does not sacrifice any advantage of binary networks in terms of model compression and speeding up inference. As a further contribution, instead of computing the binary weight scaling factors analytically, as in prior work, we propose to learn them discriminatively via back-propagation. Finally, we show that our approach significantly outperforms existing methods on the challenging tasks of (a) human pose estimation (more than 4% improvement) and (b) ImageNet classification (up to 5% performance gain).
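The abstract describes the mechanism only at a high level. The following is a minimal PyTorch sketch of the idea, not the authors' implementation: it assumes a simple rank-R matrix factorization W ≈ UV of the flattened weight tensor (the paper also considers tensor decompositions), a sign() quantizer with a straight-through gradient estimator, and one learned scaling factor per output channel (the per-channel shape is our assumption). Activation binarization is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LatentFactorizedBinaryConv2d(nn.Module):
    """Sketch of a binary conv layer whose weights are parametrized by a
    real-valued low-rank factorization, binarized only on reconstruction."""

    def __init__(self, in_ch, out_ch, kernel_size, rank, stride=1, padding=0):
        super().__init__()
        cols = in_ch * kernel_size * kernel_size
        # Real-valued latent factors; their product couples all filters of
        # the layer, which is the coupling the abstract refers to.
        self.U = nn.Parameter(torch.randn(out_ch, rank) * 0.1)
        self.V = nn.Parameter(torch.randn(rank, cols) * 0.1)
        # Scaling factor learned via back-propagation (the paper's proposal),
        # instead of the analytic L1-norm scaling of earlier work; the
        # per-output-channel shape here is an assumption.
        self.alpha = nn.Parameter(torch.ones(out_ch, 1, 1, 1))
        self.shape = (out_ch, in_ch, kernel_size, kernel_size)
        self.stride, self.padding = stride, padding

    def forward(self, x):
        # Reconstruct real-valued weights from the latent factorization.
        w = (self.U @ self.V).reshape(self.shape)
        # Binarize with sign(); the straight-through estimator makes the
        # forward pass use sign(w) while gradients flow to U and V as if
        # the quantizer were the identity. (torch.sign maps 0 to 0, a
        # corner case ignored in this sketch.)
        w_bin = (torch.sign(w) - w).detach() + w
        return F.conv2d(x, self.alpha * w_bin,
                        stride=self.stride, padding=self.padding)


# Usage example:
layer = LatentFactorizedBinaryConv2d(64, 128, 3, rank=32, padding=1)
out = layer(torch.randn(8, 64, 32, 32))  # -> shape (8, 128, 32, 32)
```

At inference time the factorization would add no runtime cost: sign(UV) can be precomputed once and stored as a binary tensor, consistent with the abstract's claim that compression and speed advantages are preserved.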

URL

https://arxiv.org/abs/1904.07852

PDF

https://arxiv.org/pdf/1904.07852.pdf
