Abstract
Large Language Models (LLMs), despite their impressive performance on a wide range of tasks, require significant GPU memory and consume substantial computational resources. Beyond the model weights, the memory occupied by the KV cache grows linearly with sequence length, making it a major bottleneck for inference. In this paper, we introduce a novel approach to optimizing the KV cache that significantly reduces its memory footprint. Through a comprehensive investigation, we find that on LLaMA2-series models, (i) the similarity between adjacent tokens' query vectors is remarkably high, and (ii) the attention computation of the current query can rely solely on the attention information of a small portion of the preceding queries. Based on these observations, we propose CORM, a KV cache eviction policy that dynamically retains important key-value pairs for inference without finetuning the model. We validate that CORM reduces the inference memory usage of the KV cache by up to 70% with no noticeable performance degradation across six tasks in LongBench.
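The abstract only describes the eviction idea at a high level: keys that a small window of recent queries barely attended to are unlikely to matter for the current query and can be dropped. The following PyTorch snippet is a minimal sketch of that heuristic, not the authors' implementation; the function name, the `recent_window` size, and the `threshold` value are illustrative assumptions.

```python
import torch

def corm_style_eviction(keys, values, recent_attn, recent_window=8, threshold=1e-3):
    """Toy KV-cache eviction step inspired by the paper's observations.

    keys, values:  (seq_len, head_dim) cached key/value vectors for one head
    recent_attn:   (recent_window, seq_len) attention weights that the most
                   recent `recent_window` queries assigned to each cached position
    threshold:     cached positions whose attention never exceeds this value
                   within the recent window are treated as unimportant
    """
    # Keep a position if at least one recent query attended to it above the
    # threshold (the "small portion of preceding queries" heuristic).
    important = (recent_attn > threshold).any(dim=0)   # (seq_len,) bool mask
    # Always keep the most recent tokens so local context is never evicted.
    important[-recent_window:] = True
    keep_idx = torch.nonzero(important, as_tuple=False).squeeze(-1)
    return keys[keep_idx], values[keep_idx], keep_idx
```

In a decoding loop, such a step would be applied per head after each new token's attention weights are computed, so the retained cache shrinks to the positions that recent queries actually used.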
URL
https://arxiv.org/abs/2404.15949