Abstract
Weight quantization is one of the most important techniques for Deep Neural Network (DNN) model compression. A recent work builds a systematic framework for DNN weight quantization on the advanced optimization algorithm ADMM (Alternating Direction Method of Multipliers) and achieves state-of-the-art results in weight quantization. In this work, we first extend this ADMM-based framework to guarantee solution feasibility, and we further develop a multi-step, progressive DNN weight quantization framework with dual benefits: (i) achieving further weight quantization thanks to the special property of ADMM regularization, and (ii) reducing the search space within each step. Extensive experimental results demonstrate superior performance compared with prior work. Some highlights: we derive the first lossless and fully binarized (all layers) LeNet-5 for MNIST, and the first fully binarized (all layers) VGG-16 for CIFAR-10 and ResNet for ImageNet with reasonable accuracy loss.
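The general idea behind ADMM-based weight quantization can be sketched as follows. This is a hypothetical toy illustration, not the paper's implementation: the loss `f`, the learning rate, the penalty `rho`, and the single-scale binary projection are all simplifying assumptions. ADMM splits the constrained problem (minimize f(W) subject to W in a discrete set S) into an unconstrained W-step, a projection Z-step, and a dual U-update.

```python
import numpy as np

def project_binary(w):
    """Euclidean projection onto {-a, +a}^n, with the closed-form
    optimal shared scale a = mean(|w|) (toy choice for illustration)."""
    a = np.mean(np.abs(w))
    return a * np.sign(w)

def admm_quantize(w0, grad_f, rho=1e-2, lr=1e-1, steps=200, inner=10):
    """Toy ADMM loop: Z is the quantized copy of W, U the scaled dual."""
    w = w0.copy()
    z = project_binary(w)
    u = np.zeros_like(w)
    for _ in range(steps):
        # W-step: a few gradient steps on f(W) + (rho/2)||W - Z + U||^2
        for _ in range(inner):
            w -= lr * (grad_f(w) + rho * (w - z + u))
        z = project_binary(w + u)   # Z-step: project onto the binary set
        u += w - z                  # dual update
    return z                        # feasible (fully binarized) weights

# Hypothetical quadratic objective pulling weights toward a dense target.
target = np.array([0.9, -1.1, 0.4, -0.3])
grad_f = lambda w: w - target
wq = admm_quantize(np.zeros(4), grad_f)
```

Returning `z` rather than `w` is what "guaranteeing solution feasibility" amounts to here: `z` lies exactly in the quantized set by construction, whereas `w` generally does not.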
URL
https://arxiv.org/abs/1905.00789