Abstract
Modern Large Language Models (LLMs) have showcased remarkable prowess in various tasks necessitating sophisticated cognitive behaviors. Nevertheless, a paradoxical performance discrepancy is observed: these models underperform on seemingly elementary tasks such as relation extraction and event extraction, owing to two issues in conventional evaluation: (1) the imprecision of existing evaluation metrics, which struggle to effectively gauge semantic consistency between model outputs and the ground truth, and (2) the inherent incompleteness of evaluation benchmarks, primarily due to restrictive human annotation schemas, which results in underestimated LLM performance. Inspired by the principles of subjective question correction, we propose a new evaluation method, SQC-Score. This method innovatively utilizes LLMs, fine-tuned on subjective question correction data, to refine the matching between model outputs and golden labels. Additionally, by incorporating a Natural Language Inference (NLI) model, SQC-Score enriches the golden labels, addressing benchmark incompleteness by acknowledging correct yet previously omitted answers. Results on three information extraction tasks show that SQC-Score is preferred by human annotators over the baseline metrics. Utilizing SQC-Score, we conduct a comprehensive evaluation of state-of-the-art LLMs and provide insights for future research on information extraction. The dataset and associated code can be accessed at this https URL.
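The label-enrichment idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual implementation: the function name `sqc_style_score`, the triple-based data format, and the `entails` callback (standing in for a real NLI model) are all assumptions for the sake of the example.

```python
def sqc_style_score(predictions, gold, source_text, entails):
    """Illustrative sketch: score predictions against gold labels, crediting
    predictions outside the gold set when an NLI-style check judges them
    entailed by the source text (addressing benchmark incompleteness)."""
    gold = set(gold)
    matched = [p for p in predictions if p in gold]
    # Candidate answers the annotators may have omitted: not in the gold
    # set, but entailed by the source document according to the NLI check.
    recovered = [p for p in predictions if p not in gold and entails(source_text, p)]
    enriched_gold = gold | set(recovered)  # gold labels enriched by NLI
    correct = len(matched) + len(recovered)
    precision = correct / len(predictions) if predictions else 0.0
    recall = correct / len(enriched_gold) if enriched_gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1


# Usage with a trivial stand-in for the NLI model: the second triple is
# absent from the gold labels but accepted by the entailment check.
gold = {("Einstein", "born_in", "Ulm")}
preds = [("Einstein", "born_in", "Ulm"), ("Einstein", "citizen_of", "Germany")]
always_yes = lambda text, triple: True  # a real NLI model would go here
p, r, f1 = sqc_style_score(preds, gold, "source document text", always_yes)
```

Under a strict exact-match metric the second prediction would count as a false positive; with the entailment check it is treated as a correct-but-omitted answer, which is precisely the underestimation the abstract describes.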
URL
https://arxiv.org/abs/2404.03532