Abstract
Generalizing to sequences longer than those seen during training is important for recent Transformer-based language models. Besides algorithms that manipulate explicit position features, the success of Transformers without position encodings (NoPE) provides a new way to overcome the challenge. In this paper, we study the length generalization property of NoPE. We find that although NoPE can extend to longer sequences than the commonly used explicit position encodings, it still has a limited context length. We identify a connection between the failure of NoPE's generalization and the distraction of attention distributions. We propose a parameter-efficient tuning method that searches for the best temperature hyper-parameter of each attention head, which substantially expands NoPE's context size. Experiments on long-sequence language modeling, the synthetic passkey retrieval task, and real-world long-context tasks show that NoPE can achieve competitive performance with state-of-the-art length generalization algorithms. The source code is publicly accessible.
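To make the temperature idea concrete, below is a minimal sketch (not the authors' released code) of per-head attention temperature scaling in a NoPE-style attention layer, written in PyTorch; the function name `nope_attention` and the `head_temps` parameter are illustrative assumptions, but the structure shows how only a handful of per-head scalars would need to be tuned.

```python
# Minimal sketch (illustrative, not the authors' implementation): per-head
# attention temperature scaling in a NoPE-style (no position encoding) layer.
import math
import torch

def nope_attention(q, k, v, head_temps):
    """q, k, v: (batch, n_heads, seq_len, head_dim); head_temps: (n_heads,).

    Attention logits use no explicit position encoding; each head's logits are
    divided by its own temperature before softmax, sharpening or smoothing that
    head's attention distribution on long inputs.
    """
    head_dim = q.size(-1)
    seq_len = q.size(-2)
    # Standard scaled dot-product logits, then per-head temperature scaling.
    scores = q @ k.transpose(-2, -1) / math.sqrt(head_dim)
    scores = scores / head_temps.view(1, -1, 1, 1)
    # Causal mask: each token attends only to itself and earlier tokens.
    mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
    scores = scores.masked_fill(mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

# Usage: only the n_heads temperature scalars are trainable (parameter-efficient).
q = k = v = torch.randn(1, 8, 16, 64)
head_temps = torch.nn.Parameter(torch.ones(8))
out = nope_attention(q, k, v, head_temps)
print(out.shape)  # torch.Size([1, 8, 16, 64])
```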
URL
https://arxiv.org/abs/2404.12224