Abstract
We propose a method for training the clipping thresholds of uniform symmetric quantizers using standard backpropagation and gradient descent. Our quantizers are constrained to use power-of-2 scale factors and per-tensor scaling for weights and activations. These constraints make our methods better suited for hardware implementations. Training under these difficult constraints is enabled by a combination of three techniques: using accurate threshold gradients to achieve a range-precision trade-off, training thresholds in the log domain, and training with an adaptive gradient optimizer. We refer to this collection of techniques as Adaptive-Gradient Log-domain Threshold Training (ALT). We present analytical support for the general robustness of our methods and empirically validate them on various CNNs for ImageNet classification. We achieve floating-point or near-floating-point accuracy on traditionally difficult networks such as MobileNets in fewer than 5 epochs of quantized (8-bit) retraining. Finally, we present Graffitist, a framework that enables immediate quantization of TensorFlow graphs using our methods. Code is available at https://github.com/Xilinx/graffitist.
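As a rough illustration of the kind of quantizer the abstract describes (not the paper's exact formulation), the sketch below fake-quantizes a tensor with a power-of-2, per-tensor scale derived from a trainable log-domain threshold. The function name `fake_quant_pow2`, the argument names, and the use of a simple straight-through estimator in place of the paper's accurate threshold gradients are assumptions for illustration only.

```python
import tensorflow as tf

def fake_quant_pow2(x, log2_t, bits=8):
    """Uniform symmetric fake quantization with a power-of-2, per-tensor scale.

    `log2_t` is a trainable scalar holding log2 of the clipping threshold
    (log-domain training). The ceil and round ops use straight-through
    estimators so gradients reach `log2_t` via standard backpropagation.
    This is a simplified sketch, not the paper's exact threshold gradient.
    """
    n = 2.0 ** (bits - 1)                           # e.g. 128 for 8-bit
    # Straight-through ceil: forward rounds log2_t up to an integer,
    # backward passes the gradient through unchanged.
    log2_t_ste = log2_t + tf.stop_gradient(tf.math.ceil(log2_t) - log2_t)
    scale = tf.pow(2.0, log2_t_ste) / n             # power-of-2 scale factor
    q = tf.clip_by_value(x / scale, -n, n - 1)      # symmetric clipping
    # Straight-through round to the nearest integer level.
    q = q + tf.stop_gradient(tf.round(q) - q)
    return q * scale                                # dequantized output
```

In such a setup, a per-tensor `tf.Variable` holding `log2_t` would be created for each weight and activation tensor and optimized jointly with the network weights using an adaptive optimizer such as Adam.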
URL
https://arxiv.org/abs/1903.08066