Abstract
We introduce a data-free quantization method for deep neural networks that does not require fine-tuning or hyperparameter selection. It achieves near-original model performance on common computer vision architectures and tasks. 8-bit fixed-point quantization is essential for efficient inference in modern deep learning hardware. However, quantizing models to run in 8 bits is a non-trivial task, frequently leading either to a significant drop in performance or to engineering time spent on training a network to be amenable to quantization. Our approach relies on equalizing the weight ranges in the network by exploiting a scale-equivariance property of activation functions. In addition, the method corrects biases in the error that are introduced during quantization. This improves quantized-model accuracy, and the method can be applied to almost any model with a straightforward API call. For common architectures, such as the MobileNet family, we achieve state-of-the-art quantized-model performance. We further show that the method extends to other computer vision architectures and tasks such as semantic segmentation and object detection.
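The weight-range equalization the abstract describes exploits the positive scale equivariance of ReLU, relu(s·x) = s·relu(x) for s > 0: scaling an output channel of one layer down and the matching input column of the next layer up leaves the network function unchanged while balancing the per-channel weight ranges. A minimal numpy sketch of this idea on two toy fully-connected layers (the layer sizes, random data, and range definition are illustrative, not the paper's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
# Layer 1 with deliberately uneven per-output-channel weight ranges.
W1 = rng.normal(size=(6, 4)) * rng.uniform(0.1, 10.0, size=(6, 1))
b1 = rng.normal(size=6)
W2 = rng.normal(size=(3, 6))
b2 = rng.normal(size=3)

def forward(W1, b1, W2, b2, x):
    # Two linear layers with a ReLU in between.
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Per-channel ranges: r1[i] over output channel i of layer 1,
# r2[i] over input column i of layer 2.
r1 = np.abs(W1).max(axis=1)
r2 = np.abs(W2).max(axis=0)

# Equalizing scale s[i] = sqrt(r1[i] / r2[i]) makes both ranges sqrt(r1*r2).
s = np.sqrt(r1 / r2)
W1_eq = W1 / s[:, None]
b1_eq = b1 / s
W2_eq = W2 * s[None, :]

x = rng.normal(size=4)
# The function is preserved (ReLU commutes with positive per-channel scaling)...
assert np.allclose(forward(W1, b1, W2, b2, x),
                   forward(W1_eq, b1_eq, W2_eq, b2, x))
# ...and the per-channel ranges of the two layers now match.
assert np.allclose(np.abs(W1_eq).max(axis=1), np.abs(W2_eq).max(axis=0))
```

Balanced per-channel ranges matter because a single per-tensor quantization scale must then cover far less dynamic range, which reduces rounding error for the small-range channels.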
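The bias correction the abstract mentions addresses the fact that quantizing weights shifts the *expected* layer output, not just adds noise. If the input mean E[x] is known (the paper recovers it without data from batch-norm statistics), the systematic error (W_q − W)·E[x] can be folded into the layer bias. A small numpy sketch under an assumed known input mean, using a simple symmetric 8-bit scheme as a stand-in quantizer:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 8))
b = np.zeros(4)

# Symmetric per-tensor 8-bit weight quantization (illustrative scheme).
scale = np.abs(W).max() / 127.0
W_q = np.round(W / scale).clip(-127, 127) * scale

# Assumed known input mean (data-free in the paper via batch-norm stats).
mu_x = rng.normal(size=8)
# Fold the expected quantization error into the bias.
b_corr = b - (W_q - W) @ mu_x

# Inputs distributed around that mean.
x = mu_x + rng.normal(scale=0.1, size=(1000, 8))
err_naive = (x @ W_q.T + b) - (x @ W.T + b)
err_corr = (x @ W_q.T + b_corr) - (x @ W.T + b)

# The corrected bias removes the systematic (mean) component of the error.
assert np.abs(err_corr.mean(axis=0)).max() < np.abs(err_naive.mean(axis=0)).max()
```

The residual error after correction is zero-mean rounding noise, which is what 8-bit fixed-point hardware tolerates well; the uncorrected biased error is what degrades accuracy on sensitive architectures.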
URL
https://arxiv.org/abs/1906.04721