Language Modeling through Long Term Memory Network

2019-04-18 09:19:25
Anupiya Nugaliyadde, Kok Wai Wong, Ferdous Sohel, Hong Xie

Abstract

Recurrent Neural Networks (RNNs), Long Short-Term Memory networks (LSTMs), and Memory Networks, all of which contain memory, are widely used to learn patterns in sequential data. Sequential data often contain long sequences with long-range dependencies. RNNs can handle long sequences but suffer from the vanishing and exploding gradient problems. While LSTMs and other memory networks address these problems, they are not capable of handling long sequences (patterns spanning 50 or more data points). Language modelling that requires learning from longer sequences is affected by the need to hold more information in memory. This paper introduces the Long Term Memory network (LTM), which tackles the exploding and vanishing gradient problems and handles long sequences without forgetting. LTM is designed to scale the data held in memory and to give a higher weight to the current input in the sequence. LTM avoids overfitting by scaling the cell state after the optimal results have been achieved. LTM is tested on the Penn Treebank and Text8 datasets and achieves test perplexities of 83 and 82, respectively. With 650 LTM cells it achieves a test perplexity of 67 on Penn Treebank, and with 600 cells a test perplexity of 77 on Text8. LTM achieves state-of-the-art results using only ten hidden LTM cells for both datasets.
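
As context for the mechanisms named in the abstract, the sketch below (Python with NumPy) shows a small recurrent cell that rescales its stored cell state at every step and weights the current input more heavily than the carried-over history. The abstract does not give the LTM equations, so the gate structure, the input_weighting factor, and the c / (1 + |c|) rescaling used here are illustrative assumptions, not the paper's formulation.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class LTMCellSketch:
        """Illustrative recurrent cell loosely following the abstract's description:
        the stored memory (cell state) is rescaled each step so it stays bounded,
        and the current input is weighted more heavily than the previous hidden
        state. This is an assumed sketch, not the published LTM equations."""

        def __init__(self, input_size, hidden_size, input_weighting=2.0, seed=0):
            rng = np.random.default_rng(seed)
            scale = 1.0 / np.sqrt(hidden_size)
            self.W_x = rng.normal(0.0, scale, (hidden_size, input_size))
            self.W_h = rng.normal(0.0, scale, (hidden_size, hidden_size))
            self.b = np.zeros(hidden_size)
            self.input_weighting = input_weighting  # assumed "higher weight to the input"

        def step(self, x, h, c):
            # Candidate update: the current input gets a larger weight than the
            # previous hidden state (assumed reading of the abstract).
            z = np.tanh(self.input_weighting * (self.W_x @ x) + self.W_h @ h + self.b)
            c = c + z
            # "Scaling the cell state": squash the accumulated memory back into a
            # bounded range so it cannot blow up over long sequences.
            c = c / (1.0 + np.abs(c))
            h = sigmoid(c) * np.tanh(c)
            return h, c

    # Run the cell over a random 60-step sequence (longer than the 50-step
    # horizon the abstract cites for LSTM-style networks).
    cell = LTMCellSketch(input_size=8, hidden_size=16)
    h, c = np.zeros(16), np.zeros(16)
    for x in np.random.default_rng(1).normal(size=(60, 8)):
        h, c = cell.step(x, h, c)
    print(h.shape, c.shape)  # (16,) (16,)

For reference, the reported perplexities are the exponential of the average per-token cross-entropy on the test set, so a test perplexity of 83 on Penn Treebank corresponds to roughly 4.4 nats per token.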

URL

https://arxiv.org/abs/1904.08936

PDF

https://arxiv.org/pdf/1904.08936.pdf

