Abstract
Current visual question answering datasets do not consider the rich semantic information conveyed by text within an image. In this work, we present a new dataset, ST-VQA, that aims to highlight the importance of exploiting the high-level semantic information present in images as textual cues in the VQA process. We use this dataset to define a series of tasks of increasing difficulty for which reading the scene text in the context provided by the visual information is necessary to reason and generate an appropriate answer. We propose a new evaluation metric for these tasks that accounts for both reasoning errors and shortcomings of the text recognition module. In addition, we put forward a series of baseline methods, which provide further insight into the newly released dataset and set the scene for further research.
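The metric described above rewards answers that are semantically correct but imperfectly transcribed, rather than scoring on exact string match. A minimal sketch of one such soft metric, a normalized Levenshtein similarity truncated below a threshold (the function names and the threshold value `tau=0.5` are illustrative assumptions, not taken from the abstract):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic edit distance via a one-row dynamic-programming table."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))  # dp[j] = distance between a[:i] and b[:j]
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,              # deletion
                        dp[j - 1] + 1,          # insertion
                        prev + (a[i - 1] != b[j - 1]))  # substitution
            prev = cur
    return dp[n]


def soft_score(pred: str, gold: str, tau: float = 0.5) -> float:
    """Similarity in [0, 1]; answers below the threshold score zero,
    penalizing reasoning errors while tolerating minor OCR mistakes."""
    p, g = pred.strip().lower(), gold.strip().lower()
    sim = 1.0 - levenshtein(p, g) / max(len(p), len(g), 1)
    return sim if sim >= tau else 0.0
```

Under this scheme a prediction like "stoop" for the ground truth "stop" still earns partial credit (similarity 0.8), while an unrelated answer is scored 0.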
URL
https://arxiv.org/abs/1905.13648