
On the Convergence Rate of Training Recurrent Neural Networks

2019-05-27 10:08:59
Zeyuan Allen-Zhu, Yuanzhi Li, Zhao Song

Abstract

How can local-search methods such as stochastic gradient descent (SGD) avoid bad local minima when training multi-layer neural networks? Why can they fit random labels even given non-convex and non-smooth architectures? Most existing theory only covers networks with one hidden layer, so can we go deeper? In this paper, we focus on recurrent neural networks (RNNs), which are multi-layer networks widely used in natural language processing. They are harder to analyze than feedforward neural networks, because the $\textit{same}$ recurrent unit is applied repeatedly across the entire time horizon of length $L$, which is analogous to a feedforward network of depth $L$. We show that when the number of neurons is sufficiently large, meaning polynomial in the training data size and in $L$, SGD is capable of minimizing the regression loss at a linear convergence rate. This gives theoretical evidence of how RNNs can memorize data. More importantly, we build general toolkits in this paper to analyze multi-layer networks with ReLU activations. For instance, we prove why ReLU activations can prevent exponential gradient explosion or vanishing, and we build a perturbation theory to analyze the first-order approximation of multi-layer networks.
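
To make the setting concrete, below is a minimal sketch (in PyTorch; not the authors' code) of the architecture and objective described in the abstract: an Elman-style RNN whose single ReLU recurrent cell is applied across a horizon of length L, trained by SGD on an l2 regression loss over n labelled sequences. The class name ElmanReLURNN and all sizes (width m, horizon L, mini-batch size, learning rate, step count) are illustrative placeholders; the paper's result concerns the regime where the width m is polynomially large in the number of samples n and in L.

# Minimal sketch of the setting (illustrative only, not the paper's code):
# one recurrent cell with ReLU activation shared across all L time steps,
# a linear readout, and SGD on an l2 regression loss.
import torch
import torch.nn as nn

L, d_in, m, d_out, n = 8, 16, 512, 1, 32   # horizon, input dim, width, output dim, #samples

class ElmanReLURNN(nn.Module):
    def __init__(self, d_in, m, d_out):
        super().__init__()
        self.W = nn.Linear(m, m, bias=False)      # recurrent weights, shared across all L steps
        self.A = nn.Linear(d_in, m, bias=False)   # input weights
        self.B = nn.Linear(m, d_out, bias=False)  # output (readout) weights

    def forward(self, x):                         # x: (batch, L, d_in)
        h = torch.zeros(x.size(0), self.W.in_features, device=x.device)
        for l in range(x.size(1)):                # the *same* cell is reused at every step
            h = torch.relu(self.W(h) + self.A(x[:, l]))
        return self.B(h)                          # predict from the last hidden state

# Random training data: the result is about fitting (memorizing) n labelled sequences.
x = torch.randn(n, L, d_in)
y = torch.randn(n, d_out)

model = ElmanReLURNN(d_in, m, d_out)
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(1000):
    idx = torch.randint(0, n, (8,))               # mini-batch -> stochastic gradient step
    opt.zero_grad()
    loss = loss_fn(model(x[idx]), y[idx])
    loss.backward()
    opt.step()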

URL

https://arxiv.org/abs/1810.12065

PDF

https://arxiv.org/pdf/1810.12065.pdf

