Abstract
In this paper, we seek to tackle two challenges in training low-precision networks: 1) the notorious difficulty of propagating gradients through a low-precision network due to the non-differentiable quantization function; 2) the requirement of a full-precision realization of skip connections in residual-type network architectures. During training, we introduce an auxiliary gradient module that mimics the effect of skip connections to assist optimization. We then expand the original low-precision network with the full-precision auxiliary gradient module to form a mixed-precision residual network, and optimize it jointly with the low-precision model using weight sharing and separate batch normalization. This strategy ensures that gradients back-propagate more easily, alleviating a major difficulty in training low-precision networks. Moreover, we find that when training a low-precision plain network with our method, the plain network can achieve performance similar to its counterpart with residual skip connections; i.e., the plain network without floating-point skip connections is just as effective to deploy at inference time. To further promote gradient flow during backpropagation, we employ a stochastic structured precision strategy that stochastically samples and quantizes sub-networks while keeping the remaining parts full-precision. We evaluate the proposed method on the image classification task over various quantization approaches and show consistent performance improvements.
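The first challenge the abstract names is that the quantization function has zero gradient almost everywhere, so gradients cannot flow through it directly. A minimal sketch of the standard workaround, a uniform quantizer paired with a straight-through estimator (this is the common baseline technique, not necessarily the paper's exact scheme; the function names and bit-width are illustrative assumptions):

```python
def quantize(w, bits=2):
    """Uniformly quantize w in [-1, 1] to 2**bits levels.

    The rounding step makes this function piecewise constant,
    hence non-differentiable -- the core difficulty cited above.
    """
    levels = 2 ** bits - 1
    w = max(-1.0, min(1.0, w))  # clip to the quantization range
    return round((w + 1.0) / 2.0 * levels) / levels * 2.0 - 1.0


def ste_grad(upstream_grad, w):
    """Straight-through estimator for the backward pass:
    treat the quantizer as the identity inside the clipping
    range, so the upstream gradient passes through unchanged;
    zero the gradient outside the range."""
    return upstream_grad if -1.0 <= w <= 1.0 else 0.0


# Forward uses the quantized weight; backward pretends the
# quantizer is the identity.
w = 0.37
wq = quantize(w, bits=2)   # snaps to one of {-1, -1/3, 1/3, 1}
g = ste_grad(0.5, w)       # gradient flows as if wq == w
```

Even with the straight-through estimator, the mismatch between the forward (quantized) and backward (identity) passes degrades optimization as networks deepen, which is the gap the paper's full-precision auxiliary gradient module is designed to close during training.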
URL
https://arxiv.org/abs/1903.11236