Abstract
Multimodal relation extraction (MRE) is the task of identifying the semantic relationship between two entities based on the context of a sentence-image pair. Existing retrieval-augmented approaches have mainly focused on modeling the retrieved textual knowledge, which may not suffice to accurately identify complex relations. To improve prediction, this research proposes retrieving both textual and visual evidence based on the object, the sentence, and the whole image. We further develop a novel approach to synthesize object-level, image-level, and sentence-level information for better reasoning within and across modalities. Extensive experiments and analyses show that the proposed method effectively selects and compares evidence across modalities and significantly outperforms state-of-the-art models.
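The abstract only outlines the approach, so the following is a minimal sketch of what multi-granularity evidence fusion for relation classification could look like. The class name, the cross-attention fusion design, and the 23-way label space (the size of the common MNRE benchmark) are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of multi-level retrieval-augmented MRE.
# Assumes pre-computed embeddings for the query sentence and for
# retrieved textual/visual evidence at three granularities; none
# of these names or choices come from the paper itself.
import torch
import torch.nn as nn

class MultiLevelFusion(nn.Module):
    """Fuse object-, image-, and sentence-level evidence for relation classification."""
    def __init__(self, dim: int = 768, num_relations: int = 23):
        super().__init__()
        # Cross-attention lets the sentence query evidence at each granularity.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.classifier = nn.Linear(3 * dim, num_relations)

    def forward(self, sent, obj_evid, img_evid, sent_evid):
        # sent: (B, 1, D) query; each *_evid: (B, K, D) retrieved evidence.
        fused = []
        for evid in (obj_evid, img_evid, sent_evid):
            attended, _ = self.cross_attn(sent, evid, evid)  # (B, 1, D)
            fused.append(attended.squeeze(1))
        # Concatenate the three granularity-specific summaries and classify.
        return self.classifier(torch.cat(fused, dim=-1))     # (B, num_relations)

# Toy usage with random tensors standing in for encoder outputs.
B, K, D = 2, 5, 768
model = MultiLevelFusion(dim=D)
logits = model(torch.randn(B, 1, D), torch.randn(B, K, D),
               torch.randn(B, K, D), torch.randn(B, K, D))
print(logits.shape)  # torch.Size([2, 23])
```

Keeping the three granularities as separate attention passes, as in this sketch, preserves which level each piece of evidence came from before the final classifier compares them.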
URL
https://arxiv.org/abs/2305.16166