Abstract
In-context learning (ICL) approaches typically leverage prompting to condition decoder-only language model generation on reference information. Just-in-time processing of a context is inefficient due to the quadratic cost of self-attention operations, and caching is desirable. However, caching transformer states can easily require almost as much space as the model parameters. When the right context isn't known in advance, caching ICL can be challenging. This work addresses these limitations by introducing models that, inspired by the encoder-decoder architecture, use cross-attention to condition generation on reference text without the prompt. More precisely, we leverage pre-trained decoder-only models and only train a small number of added layers. We use Question-Answering (QA) as a testbed to evaluate the ability of our models to perform conditional generation and observe that they outperform ICL, are comparable to fine-tuned prompted LLMs, and drastically reduce the space footprint relative to standard KV caching by two orders of magnitude.
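The abstract describes the architecture only at a high level. As a rough illustration, the sketch below (Python/PyTorch, with hypothetical names; not the authors' implementation) shows the kind of small trainable cross-attention layer that could be interleaved with frozen decoder blocks so that generation attends to cached context representations instead of an in-prompt copy of the reference text.

import torch
import torch.nn as nn

class CrossAttentionAdapter(nn.Module):
    """Hypothetical trainable layer that lets a frozen decoder block attend
    to pre-computed (cached) representations of a reference text."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, hidden: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # hidden:  (batch, query_len, d_model) decoder states for the question/answer tokens
        # context: (batch, ctx_len, d_model) context states cached once, reused across queries
        attended, _ = self.cross_attn(query=hidden, key=context, value=context)
        # Residual connection: the frozen decoder's computation passes through unchanged,
        # with the context contribution added on top.
        return self.norm(hidden + attended)

# Toy usage: a 2-token query attending to an 8-token cached context.
adapter = CrossAttentionAdapter(d_model=64, n_heads=4)
hidden = torch.randn(1, 2, 64)
context = torch.randn(1, 8, 64)
print(adapter(hidden, context).shape)  # torch.Size([1, 2, 64])

For a rough sense of scale behind the space claim (assuming a Llama-2-7B-like configuration, not figures from the paper): a 32-layer decoder with hidden size 4096 stores two 4096-dimensional half-precision vectors per layer per token, about 512 KB of KV cache per context token, so a few tens of thousands of cached tokens already rival the ~14 GB occupied by the weights themselves. Caching a single 4096-dimensional vector per token instead is a 64x reduction, and a smaller or quantized cached representation would push the saving past two orders of magnitude.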
URL
https://arxiv.org/abs/2404.15420