Abstract
Narrative reasoning relies on understanding eventualities in story contexts, which requires a wealth of background world knowledge. To help machines leverage such knowledge, existing solutions fall into two groups. Some focus on implicitly modeling eventuality knowledge by pretraining language models (LMs) with eventuality-aware objectives; however, this approach breaks down knowledge structures and lacks interpretability. Others explicitly collect world knowledge of eventualities into structured, eventuality-centric knowledge graphs (KGs); however, research on leveraging these knowledge sources for free-text reasoning remains limited. In this work, we propose an initial comprehensive framework called EventGround, which aims to tackle the problem of grounding free-texts to eventuality-centric KGs for contextualized narrative reasoning. We identify two critical problems in this direction: the event representation problem and the sparsity problem, and we provide simple yet effective parsing and partial information extraction methods to tackle them. Experimental results demonstrate that our approach consistently outperforms baseline models when combined with graph neural network (GNN)- or large language model (LLM)-based graph reasoning models. Our framework, incorporating grounded knowledge, achieves state-of-the-art performance while providing interpretable evidence.
URL
https://arxiv.org/abs/2404.00209