Abstract
How can local-search methods such as stochastic gradient descent (SGD) avoid bad local minima when training multi-layer neural networks? Why can they fit random labels even on non-convex, non-smooth architectures? Most existing theory covers only networks with one hidden layer, so can we go deeper? In this paper, we focus on recurrent neural networks (RNNs), multi-layer networks widely used in natural language processing. They are harder to analyze than feedforward neural networks because the $\textit{same}$ recurrent unit is applied repeatedly across the entire time horizon of length $L$, which is analogous to a feedforward network of depth $L$. We show that when the number of neurons is sufficiently large, meaning polynomial in the training data size and in $L$, SGD minimizes the regression loss at a linear convergence rate. This gives theoretical evidence for how RNNs can memorize data. More importantly, we build general toolkits for analyzing multi-layer networks with ReLU activations. For instance, we prove why ReLU activations can prevent exponential gradient explosion or vanishing, and we develop a perturbation theory for the first-order approximation of multi-layer networks.
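The recurrent structure described above, where one shared weight matrix is reapplied at every time step so that the forward pass resembles a depth-$L$ feedforward network, can be sketched as follows. This is a minimal illustrative Elman-style RNN with ReLU activations; all names, shapes, and the initialization scale are assumptions for the sketch, not the paper's construction.

```python
import numpy as np

def relu(x):
    # ReLU activation, applied elementwise
    return np.maximum(x, 0.0)

def rnn_forward(W, A, B, xs):
    """Elman-style RNN: the SAME recurrent matrix W is applied at every
    step of the length-L input sequence xs, so the forward pass composes
    L nonlinear layers sharing one weight matrix, analogous to a
    feedforward network of depth L. Names here are illustrative."""
    m = W.shape[0]
    h = np.zeros(m)            # initial hidden state
    outputs = []
    for x in xs:               # time horizon of length L
        h = relu(W @ h + A @ x)  # same W reused at every step
        outputs.append(B @ h)    # per-step readout
    return outputs

# Usage: m hidden neurons, input dimension d, time horizon L.
rng = np.random.default_rng(0)
m, d, L = 64, 8, 10
W = rng.normal(0.0, np.sqrt(2.0 / m), (m, m))  # He-style scaling keeps
A = rng.normal(0.0, np.sqrt(2.0 / m), (m, d))  # hidden norms stable
B = rng.normal(0.0, np.sqrt(1.0 / m), (1, m))
xs = rng.normal(size=(L, d))
ys = rnn_forward(W, A, B, xs)
```

With this variance scaling the hidden-state norms stay bounded across all $L$ steps rather than exploding or vanishing exponentially, which is the regime the paper's analysis of ReLU networks concerns.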
URL
https://arxiv.org/abs/1810.12065