Abstract
Large language models (LLMs) excel at few-shot in-context learning (ICL) -- learning from a few examples provided in context at inference, without any weight updates. Newly expanded context windows allow us to investigate ICL with hundreds or thousands of examples -- the many-shot regime. Going from few-shot to many-shot, we observe significant performance gains across a wide variety of generative and discriminative tasks. While promising, many-shot ICL can be bottlenecked by the amount of available human-generated examples. To mitigate this limitation, we explore two new settings: Reinforced and Unsupervised ICL. Reinforced ICL uses model-generated chain-of-thought rationales in place of human examples. Unsupervised ICL removes rationales from the prompt altogether, and prompts the model only with domain-specific questions. We find that both Reinforced and Unsupervised ICL can be quite effective in the many-shot regime, particularly on complex reasoning tasks. Finally, we demonstrate that, unlike few-shot learning, many-shot learning is effective at overriding pretraining biases and can learn high-dimensional functions with numerical inputs. Our analysis also reveals the limitations of next-token prediction loss as an indicator of downstream ICL performance.
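The three prompting regimes named in the abstract can be made concrete with a short sketch. This is an illustrative reconstruction, not the paper's code: all function names, the Q/A template, and the toy filtering logic are assumptions.

```python
# Minimal sketch of the prompting regimes described in the abstract.
# All names and the prompt template are illustrative assumptions.

def many_shot_prompt(examples, question):
    """Standard (many-shot) ICL: each in-context example is a
    (question, rationale-plus-answer) pair; scale len(examples) into
    the hundreds or thousands for the many-shot regime."""
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\n\nQ: {question}\nA:"

def unsupervised_icl_prompt(questions, question):
    """Unsupervised ICL: rationales and answers are removed; the prompt
    contains only domain-specific questions."""
    shots = "\n\n".join(f"Q: {q}" for q in questions)
    return f"{shots}\n\nQ: {question}\nA:"

def reinforced_icl_examples(problems, sample_fn, is_correct):
    """Reinforced ICL: sample model-generated chain-of-thought rationales
    and keep only those whose final answer checks out; the kept pairs are
    then fed to many_shot_prompt in place of human-written examples."""
    kept = []
    for q, gold in problems:
        rationale = sample_fn(q)          # model-generated rationale
        if is_correct(rationale, gold):   # filter by answer correctness
            kept.append((q, rationale))
    return kept
```

The design point is that Reinforced ICL changes only where the in-context examples come from, while Unsupervised ICL changes what each example contains; both reuse the same many-shot prompt scaffold.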
URL
https://arxiv.org/abs/2404.11018