Abstract
In this paper, we aim to provide an introduction to gradient descent based optimization algorithms for learning deep neural network models. Deep learning models involving multiple nonlinear projection layers are very challenging to train. Nowadays, most deep learning model training still relies on the back-propagation algorithm, in which the model variables are updated iteratively with gradient descent based optimization algorithms until convergence. Besides the conventional vanilla gradient descent algorithm, many gradient descent variants have been proposed in recent years to improve learning performance, including Momentum, Adagrad, Adam, and Gadam, each of which is introduced in this paper.
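To make the update rules concrete, the sketch below applies the vanilla gradient descent, Momentum, Adagrad, and Adam updates to a toy quadratic objective. This is a minimal illustration under assumed hyperparameters (learning rate, decay factors, iteration counts), not the paper's implementation; Gadam is omitted here since its update goes beyond a few lines.

    import numpy as np

    # Toy objective f(theta) = 0.5 * ||theta||^2; its gradient is theta itself,
    # so each optimizer below should move theta toward the minimizer at zero,
    # though at different rates.
    def grad(theta):
        return theta

    lr = 0.1  # learning rate (assumed value)

    # Vanilla gradient descent: theta <- theta - lr * grad(theta)
    theta = np.array([2.0, -3.0])
    for _ in range(100):
        theta = theta - lr * grad(theta)
    print("vanilla: ", theta)

    # Momentum: accumulate an exponentially decaying sum of past gradients.
    theta, v, beta = np.array([2.0, -3.0]), np.zeros(2), 0.9
    for _ in range(100):
        v = beta * v + grad(theta)
        theta = theta - lr * v
    print("momentum:", theta)

    # Adagrad: per-coordinate steps shrink as squared gradients accumulate in G.
    theta, G, eps = np.array([2.0, -3.0]), np.zeros(2), 1e-8
    for _ in range(100):
        g = grad(theta)
        G = G + g ** 2
        theta = theta - lr * g / (np.sqrt(G) + eps)
    print("adagrad: ", theta)

    # Adam: bias-corrected first (m) and second (s) moment estimates of the gradient.
    theta, m, s = np.array([2.0, -3.0]), np.zeros(2), np.zeros(2)
    beta1, beta2 = 0.9, 0.999
    for t in range(1, 101):
        g = grad(theta)
        m = beta1 * m + (1 - beta1) * g
        s = beta2 * s + (1 - beta2) * g ** 2
        m_hat = m / (1 - beta1 ** t)  # bias correction for the running averages
        s_hat = s / (1 - beta2 ** t)
        theta = theta - lr * m_hat / (np.sqrt(s_hat) + eps)
    print("adam:    ", theta)

Running the script shows all four optimizers approaching the minimizer at different speeds, which is the kind of behavioral difference among variants that the paper surveys.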
Abstract (translated)
This paper aims to introduce gradient descent based optimization algorithms for deep neural network models. Deep learning models involving multiple nonlinear projection layers are very challenging to train. At present, most deep learning model training still relies on the back-propagation algorithm, in which model variables are updated iteratively with gradient descent based optimization algorithms until convergence. Besides the conventional vanilla gradient descent algorithm, many gradient descent variants have been proposed in recent years to improve learning performance, including Momentum, Adagrad, Adam, and Gadam, each of which is introduced in this paper.
URL
https://arxiv.org/abs/1903.03614