Abstract
Recurrent Neural Networks (RNNs), Long Short-Term Memory networks (LSTMs), and other memory-augmented networks are widely used to learn patterns in sequential data. Sequential data contain long sequences with long-range dependencies. RNNs can process long sequences but suffer from the vanishing and exploding gradient problems. While LSTMs and other memory networks address these problems, they struggle with long sequences (patterns of 50 or more data points). Language modelling, which requires learning from longer sequences, is limited by how much information the memory can retain. This paper introduces the Long Term Memory network (LTM), which tackles the exploding and vanishing gradient problems and handles long sequences without forgetting. LTM is designed to scale the data held in memory and to give a higher weight to the current input in the sequence. LTM avoids overfitting by scaling the cell state after optimal results are achieved. The LTM is tested on the Penn Treebank and Text8 datasets, achieving test perplexities of 83 and 82 respectively. With 650 LTM cells it achieves a test perplexity of 67 on Penn Treebank, and with 600 cells a test perplexity of 77 on Text8. LTM achieves state-of-the-art results using only ten hidden LTM cells on both datasets.
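The abstract says only that the cell "scales data in the memory" and "gives a higher weight to the input"; the paper's actual equations are not reproduced here. As a purely illustrative sketch of that idea, a recurrent update might boost the current input and squash the cell state into a bounded range, which keeps the recurrent signal from exploding over long sequences. Every formula and parameter name below (`ltm_step`, `input_boost`, the sigmoid scaling) is an assumption for illustration, not the paper's definition:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ltm_step(x_t, c_prev, Wx, Wc, input_boost=2.0):
    """Hypothetical one-step update: weight the current input more heavily,
    mix it with the stored memory, then scale (squash) the new cell state
    into (0, 1) so it stays bounded across arbitrarily long sequences."""
    candidate = Wx @ (input_boost * x_t) + Wc @ c_prev  # boosted input + memory
    c_t = sigmoid(candidate)                            # scaled cell state
    return c_t

rng = np.random.default_rng(0)
dim = 4
Wx = rng.standard_normal((dim, dim))
Wc = rng.standard_normal((dim, dim))
c = np.zeros(dim)
for t in range(100):                        # a long sequence: 100 steps
    c = ltm_step(rng.standard_normal(dim), c, Wx, Wc)
print(c)                                    # state remains bounded in (0, 1)
```

The point of the sketch is only the bounding behaviour: because the state is rescaled at every step, its magnitude cannot blow up no matter how long the sequence is, which is the property the abstract attributes to LTM.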
URL
https://arxiv.org/abs/1904.08936