Abstract
We propose differentiable quantization (DQ) for efficient deep neural network (DNN) inference, where gradient descent is used to learn the quantizer's step size, dynamic range, and bitwidth. Training with differentiable quantizers brings two main benefits: first, DQ introduces no additional hyperparameters; second, we can learn a different step size, dynamic range, and bitwidth for each layer. Our experiments show that DNNs with heterogeneous, learned bitwidths outperform DNNs with a single homogeneous bitwidth. Further, we show that one natural DQ parametrization is especially well suited for training. We confirm our findings with experiments on CIFAR-10 and ImageNet, obtaining quantized DNNs whose learned quantization parameters achieve state-of-the-art performance.
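The abstract's core idea, learning the quantizer's parameters by gradient descent, can be sketched in a few lines. Below is a minimal, hypothetical PyTorch illustration, not the paper's implementation: it picks one possible parametrization (a learnable step size and dynamic range, with the effective bitwidth derived from their ratio) and uses a straight-through estimator for the rounding gradient; the abstract does not say which parametrization the paper favors, and all names here (ste_round, DifferentiableQuantizer, log_step, log_range) are illustrative.

```python
import torch
import torch.nn as nn


def ste_round(t: torch.Tensor) -> torch.Tensor:
    # Round in the forward pass, identity gradient in the backward pass
    # (straight-through estimator for the non-differentiable rounding).
    return t + (torch.round(t) - t).detach()


class DifferentiableQuantizer(nn.Module):
    """Sketch of a uniform quantizer with learnable step size and range.

    Illustrative only; parameter names and the exact parametrization are
    assumptions, not the paper's formulation.
    """

    def __init__(self, init_step: float = 0.05, init_range: float = 1.0):
        super().__init__()
        # Log-parametrization keeps both quantities positive during SGD.
        self.log_step = nn.Parameter(torch.tensor(init_step).log())
        self.log_range = nn.Parameter(torch.tensor(init_range).log())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        d = self.log_step.exp()       # learnable step size
        q_max = self.log_range.exp()  # learnable dynamic range
        x = torch.clamp(x, -q_max, q_max)  # gradients reach q_max here
        return d * ste_round(x / d)        # gradients reach d here

    def bitwidth(self) -> torch.Tensor:
        # Real-valued bitwidth implied by the learned range/step ratio
        # (a signed quantizer with 2*q_max/d + 1 levels).
        d, q_max = self.log_step.exp(), self.log_range.exp()
        return torch.log2(2 * q_max / d + 1)


# Example: quantize a weight tensor inside a training step.
q = DifferentiableQuantizer()
w = torch.randn(16, 16, requires_grad=True)
loss = (q(w) ** 2).mean()
loss.backward()  # populates grads for w, q.log_step, and q.log_range
```

Because each layer would hold its own quantizer instance, gradient descent can drive the layers toward heterogeneous bitwidths, which is the effect the abstract reports as outperforming a homogeneous bitwidth.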
URL
https://arxiv.org/abs/1905.11452