Abstract
We introduce Holmes, a benchmark for assessing the linguistic competence of language models (LMs) - their ability to grasp linguistic phenomena. Unlike prior prompting-based evaluations, Holmes assesses the linguistic competence of LMs via their internal representations using classifier-based probing. In doing so, we disentangle specific phenomena (e.g., the part of speech of words) from other cognitive abilities, such as following textual instructions, and meet recent calls to assess LMs' linguistic competence in isolation. To compose Holmes, we reviewed over 250 probing studies and feature more than 200 datasets assessing syntax, morphology, semantics, reasoning, and discourse phenomena. Analyzing over 50 LMs reveals that, in line with known trends, their linguistic competence correlates with model size. Surprisingly, however, model architecture and instruction tuning also significantly influence performance, particularly in morphology and syntax. Finally, we propose FlashHolmes, a streamlined version of Holmes designed to reduce the computational load while maintaining high ranking precision.
URL
https://arxiv.org/abs/2404.18923