Abstract
Classification tasks are widely investigated under the In-Context Learning (ICL) paradigm. However, current efforts are evaluated on disjoint benchmarks and settings, and their performance is strongly influenced by seemingly trivial variables such as prompt templates, data sampling, and instructions. This leads to substantial inconsistencies in the results reported across the literature, preventing fair comparison or meta-analysis across papers. Therefore, this paper proposes StaICC, a standardized and easy-to-use evaluation toolkit for in-context classification. For the standard classification task, we provide StaICC-Normal, which selects 10 widely used datasets and generates prompts in a fixed form to mitigate variance across experimental implementations. To broaden the usage of our benchmark, we also provide a sub-benchmark, StaICC-Diag, for diagnosing ICL from several aspects and supporting more robust inference.
URL
https://arxiv.org/abs/2501.15708
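The abstract stresses that fixing the prompt template, the demonstration sampling, and the metric removes most of the variance between ICL evaluations. Below is a minimal sketch of what such a standardized in-context classification evaluation pins down; it is not the actual StaICC API, and all names (`TEMPLATE`, `build_prompt`, `evaluate`, the dummy predictor) are illustrative assumptions.

```python
# Minimal sketch (NOT the StaICC API) of a standardized ICL classification
# evaluation: one fixed prompt template, a fixed demonstration set, and a
# plain accuracy metric over a fixed test split.
from typing import Callable, List, Tuple

Example = Tuple[str, str]  # (input text, label token)

# One fixed prompt form used for every run, so template choice is not a variable.
TEMPLATE = "Input: {text}\nLabel: {label}"

def build_prompt(demos: List[Example], query: str) -> str:
    """Concatenate the demonstrations and the query using the fixed template."""
    parts = [TEMPLATE.format(text=t, label=l) for t, l in demos]
    parts.append(TEMPLATE.format(text=query, label="").rstrip())
    return "\n\n".join(parts)

def evaluate(model_predict: Callable[[str], str],
             demos: List[Example],
             test_set: List[Example]) -> float:
    """Accuracy of a label-predicting model over the fixed test split."""
    correct = 0
    for text, gold in test_set:
        pred = model_predict(build_prompt(demos, text))
        correct += int(pred.strip() == gold)
    return correct / len(test_set)

if __name__ == "__main__":
    # Toy sentiment example with a dummy predictor standing in for an LLM call.
    demos = [("great movie", "positive"), ("terrible plot", "negative")]
    test = [("loved it", "positive"), ("awful acting", "negative")]
    dummy = lambda prompt: "positive" if "loved" in prompt else "negative"
    print(f"accuracy = {evaluate(dummy, demos, test):.2f}")
```

In this framing, StaICC-Normal corresponds to shipping the datasets, demonstration samples, and template already fixed, so that different papers report numbers under identical conditions.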