Abstract
Attention mechanisms have revolutionized sequence learning but suffer from quadratic computational complexity. This paper introduces Lattice, a novel recurrent neural network (RNN) mechanism that exploits the inherent low-rank structure of the key-value (K-V) matrices to efficiently compress the cache into a fixed number of memory slots, achieving sub-quadratic complexity. We formulate this compression as an online optimization problem and derive a dynamic memory update rule from a single gradient descent step. The resulting recurrence features a state- and input-dependent gating mechanism, offering an interpretable memory update process. The core innovation is the orthogonal update: each memory slot is updated only with information orthogonal to its current state, so that only novel, non-redundant data is incorporated and interference with previously stored information is minimized. Experimental results show that Lattice achieves the lowest perplexity among all baselines across diverse context lengths, with the improvement becoming more pronounced as the context length increases.
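The abstract does not spell out the recurrence, so the following is only a minimal sketch of the described idea, not the paper's actual update rule: a memory of m slots stored as rows of a matrix S, a sigmoid gate computed from the current state and the incoming key (the gate projection W_gate, the step size lr, and all dimensions are hypothetical), and a single gradient-style step that adds only the component of each candidate write orthogonal to the corresponding slot.

import numpy as np

def orthogonal_slot_update(S, k, v, W_gate, lr=0.1):
    """Illustrative gated, orthogonal memory update (not the paper's exact rule).

    S      : (m, d) memory, one row per slot
    k, v   : (d,) key / value of the incoming token
    W_gate : (d, m) hypothetical gate projection
    lr     : step size of the single gradient-descent-style step
    """
    # State- and input-dependent gate in [0, 1], one scalar per slot.
    gate = 1.0 / (1.0 + np.exp(-(S @ k + k @ W_gate)))          # (m,)

    # Candidate write: gate-weighted outer product with the value vector,
    # analogous to a one-step gradient update of a reconstruction objective.
    candidate = np.outer(gate, v)                                # (m, d)

    # Keep only the component of each candidate row orthogonal to the
    # corresponding slot's current state, so redundant information is dropped.
    slot_norm_sq = np.sum(S * S, axis=1, keepdims=True) + 1e-8   # (m, 1)
    parallel = (np.sum(candidate * S, axis=1, keepdims=True) / slot_norm_sq) * S
    orthogonal = candidate - parallel

    return S + lr * orthogonal

# Example usage on a random token stream (dimensions are illustrative).
rng = np.random.default_rng(0)
m, d = 8, 16
S = rng.standard_normal((m, d)) * 0.01
W_gate = rng.standard_normal((d, m)) * 0.01
for _ in range(32):
    k, v = rng.standard_normal(d), rng.standard_normal(d)
    S = orthogonal_slot_update(S, k, v, W_gate)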
URL
https://arxiv.org/abs/2504.05646