Abstract
Recently, directly using large language models (LLMs) has been shown to be the most reliable method to evaluate QA models. However, it suffers from limited interpretability, high cost, and environmental harm. To address these issues, we propose using soft EM with entity-driven answer set expansion. Our approach expands the gold answer set to include diverse surface forms, based on the observation that surface forms often follow particular patterns depending on the entity type. Experimental results show that our method outperforms traditional evaluation methods by a large margin. Moreover, the reliability of our evaluation method is comparable to that of LLM-based ones, while offering the benefits of high interpretability and reduced environmental harm.
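To make the idea concrete, here is a minimal sketch of soft EM over an expanded gold answer set. The normalization follows the common SQuAD-style convention (lowercasing, stripping punctuation and articles), and the expansion rule shown (adding a last-name-only form for person entities) is a hypothetical illustration, not the paper's actual rule set:

```python
import re
import string


def normalize(text: str) -> str:
    """SQuAD-style normalization: lowercase, drop punctuation and articles."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def soft_em(prediction: str, gold_answers: list[str]) -> bool:
    """Soft EM: the prediction is correct if any gold surface form
    appears as a substring of the normalized prediction."""
    pred = normalize(prediction)
    return any(normalize(g) in pred for g in gold_answers)


def expand_answers(answer: str, entity_type: str) -> list[str]:
    """Entity-driven expansion (illustrative rule only): for PERSON
    entities, also accept the last name on its own."""
    forms = [answer]
    if entity_type == "PERSON":
        parts = answer.split()
        if len(parts) > 1:
            forms.append(parts[-1])  # e.g. "Barack Obama" -> "Obama"
    return forms
```

With this expansion, a verbose prediction such as "The answer is Obama." matches the gold answer "Barack Obama", which strict EM would mark wrong.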
URL
https://arxiv.org/abs/2404.15650