Abstract
On-board processing elements on UAVs are currently inadequate for training and inference of Deep Neural Networks, largely due to the energy cost of memory accesses in such networks. HadaNets introduce a flexible train-from-scratch tensor quantization scheme by pairing a full-precision tensor with a binary tensor in the form of a Hadamard product. Unlike wider reduced-precision neural network models, we preserve the train-time parameter count, thus outperforming XNOR-Nets without a train-time memory penalty. Such training routines could see great utility in semi-supervised online learning tasks. Our method also offers advantages in model compression: we reduce the model size of ResNet-18 by a factor of 7.43 relative to a full-precision model without utilizing any other compression techniques. We also demonstrate a 'Hadamard Binary Matrix Multiply' kernel, which delivers a 10-fold speedup over a similarly optimized full-precision matrix-multiplication kernel.
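As a rough illustration of the idea described above, the pairing of a binary tensor with a full-precision tensor can be sketched in NumPy. This is a minimal sketch under assumptions of our own (a sign-binarized tensor paired with a per-row full-precision scale tensor, in the spirit of XNOR-Net-style scaling); the function name and the exact structure of the scale tensor are illustrative, not the paper's actual formulation.

```python
import numpy as np

def hadamard_quantize(W):
    """Hypothetical sketch: approximate a full-precision tensor W as the
    Hadamard (elementwise) product of a binary tensor B and a
    full-precision scale tensor A. The per-row shared scale here is an
    assumption for illustration, not the paper's exact scheme."""
    # Binary tensor in {-1, +1}
    B = np.where(W >= 0.0, 1.0, -1.0)
    # Full-precision scale tensor: mean absolute value per row,
    # broadcast back to W's shape so the pairing is elementwise
    A = np.mean(np.abs(W), axis=1, keepdims=True) * np.ones_like(W)
    return B, A

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
B, A = hadamard_quantize(W)
W_hat = B * A  # Hadamard product reconstructs an approximation of W
```

Because `B` is binary, only the (much smaller, shared) scales in `A` need full precision at inference time, which is where the memory and model-size savings come from.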
URL
https://arxiv.org/abs/1905.10759