Abstract
Evaluating the in-context learning classification performance of language models poses challenges due to small dataset sizes, extensive prompt selection using the validation set, and intentionally difficult tasks that lead to near-random performance. The standard random baseline -- the expected accuracy of guessing labels uniformly at random -- is stable when the evaluation set is used only once or when the dataset is large. We account for the common practice of validation set reuse and existing small datasets with a stronger random baseline: the expected maximum accuracy across multiple random classifiers. When choosing the best prompt demonstrations across six quantized language models applied to 16 BIG-bench Lite tasks, more than 20% of the few-shot results that exceed the standard baseline do not exceed this stronger random baseline. When held-out test sets are available, this stronger baseline is also a better predictor of held-out performance than the standard baseline, avoiding unnecessary test set evaluations. This maximum random baseline provides an easily calculated drop-in replacement for the standard baseline.
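The maximum random baseline described above can be computed in closed form: if a uniform-random classifier's correct-guess count on n examples is Binomial(n, 1/k) for k classes, then the maximum over t independent random classifiers has CDF F(m)^t, and its expectation follows directly. A minimal sketch (the function name and parameters are illustrative, not the paper's code):

```python
from math import comb

def expected_max_accuracy(n, num_classes, t):
    """Expected maximum accuracy over t independent uniform-random
    classifiers evaluated on the same n-example dataset.

    n: dataset size; num_classes: k (uniform guess succeeds w.p. 1/k);
    t: number of random classifiers (e.g. prompts tried on the validation set).
    """
    p = 1.0 / num_classes
    # Binomial(n, p) CDF over the number of correct guesses m = 0..n.
    cdf, total = [], 0.0
    for m in range(n + 1):
        total += comb(n, m) * p**m * (1 - p) ** (n - m)
        cdf.append(min(total, 1.0))
    # P(max <= m) = F(m)^t, so
    # E[max accuracy] = sum_m (m/n) * (F(m)^t - F(m-1)^t).
    exp_max, prev = 0.0, 0.0
    for m in range(n + 1):
        cur = cdf[m] ** t
        exp_max += (m / n) * (cur - prev)
        prev = cur
    return exp_max
```

With t = 1 this reduces to the standard baseline (1/k); as t grows it rises above it, which is why repeated validation-set reuse makes the standard baseline too easy to beat on small datasets.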
URL
https://arxiv.org/abs/2404.13020