Abstract
Effective ontology transfer has been a major goal of recent work on event argument extraction (EAE). Two methods in particular -- question answering (QA) and template infilling (TI) -- have emerged as promising approaches to this problem. However, detailed explorations of these techniques' ability to actually enable this transfer are lacking. In this work, we provide such a study, exploring zero-shot transfer using both techniques on six major EAE datasets at both the sentence and document levels. Further, we challenge the growing reliance on LLMs for zero-shot extraction, showing that vastly smaller models trained on an appropriate source ontology can yield zero-shot performance superior to that of GPT-3.5 or GPT-4.
URL
https://arxiv.org/abs/2404.08579