Abstract
Large language models (LLMs) have proven to be remarkably efficient, both across a wide range of natural language processing tasks and well beyond them. However, a comprehensive theoretical analysis of the origins of their impressive performance remains elusive. In this paper, we approach this challenging task by drawing an equivalence between generic autoregressive language models with a vocabulary of size $T$ and a context window of size $K$ and Markov chains defined on a finite state space of size $\mathcal{O}(T^K)$. We derive several surprising findings related to the existence of a stationary distribution of the Markov chains that capture the inference power of LLMs, their speed of convergence to it, and the influence of the temperature on the latter. We then prove pre-training and in-context generalization bounds and show how the drawn equivalence allows us to enrich their interpretation. Finally, we illustrate our theoretical guarantees with experiments on several recent LLMs to highlight how they capture the behavior observed in practice.
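As a back-of-the-envelope illustration of where the $\mathcal{O}(T^K)$ state count could come from (a sketch based only on the quantities named in the abstract, not on the paper's exact construction): if each Markov-chain state is taken to be a context, i.e. a token sequence of length between $1$ and $K$ over a vocabulary of size $T$, then the number of states is

$$\sum_{k=1}^{K} T^{k} \;=\; \frac{T^{K+1}-T}{T-1} \;=\; \mathcal{O}(T^{K}),$$

a finite but exponentially large state space. For instance, $T=2$ and $K=3$ already gives $2+4+8=14$ states.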
Abstract (translated)
Large language models (LLMs) have proven remarkably efficient across natural language processing tasks and well beyond them. However, a comprehensive theoretical analysis of their impressive performance remains elusive. In this paper, we address this problem by drawing an equivalence between autoregressive language models with a vocabulary of size $T$ and a context window of size $K$, and Markov chains defined on a finite state space of size $\mathcal{O}(T^K)$. We derive several surprising findings on the existence of a stationary distribution of these Markov chains, which sheds light on the inference power of LLMs, their speed of convergence to that distribution, and the influence of temperature on it. We then prove pre-training and in-context generalization bounds and show how this equivalence enriches their interpretation. Finally, we illustrate our theoretical guarantees with experiments, highlighting how they capture the behavior observed in practice.
URL
https://arxiv.org/abs/2410.02724