Abstract
Visual Question Answering (VQA) in its ideal form lets us study reasoning in the joint space of vision and language and serves as a proxy for the AI task of scene understanding. However, most VQA benchmarks to date focus on questions such as simple counting, visual attributes, and object detection, which do not require reasoning or knowledge beyond what is in the image. In this paper, we address the task of knowledge-based visual question answering and provide a benchmark, called OK-VQA, where the image content alone is not sufficient to answer the questions, encouraging methods that rely on external knowledge resources. Our new dataset includes more than 14,000 questions that require external knowledge to answer. We show that the performance of state-of-the-art VQA models degrades drastically in this new setting. Our analysis shows that our knowledge-based VQA task is more diverse, more difficult, and larger than previous knowledge-based VQA datasets. We hope this dataset enables researchers to open up new avenues for research in this domain. See this http URL to download and browse the dataset.
URL
https://arxiv.org/abs/1906.00067
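
For readers who download the dataset, below is a minimal sketch of loading and browsing OK-VQA question–answer pairs, assuming the release follows the VQA-style JSON layout (a "questions" list and an "annotations" list keyed by question_id); the file names here are placeholders, not confirmed by the abstract, so substitute the actual names from the download page.

```python
# Minimal sketch: pair OK-VQA questions with their annotations.
# File names are hypothetical stand-ins for the released JSON files.
import json

QUESTIONS_FILE = "OpenEnded_mscoco_val2014_questions.json"   # assumed name
ANNOTATIONS_FILE = "mscoco_val2014_annotations.json"          # assumed name

with open(QUESTIONS_FILE) as f:
    questions = json.load(f)["questions"]
with open(ANNOTATIONS_FILE) as f:
    annotations = json.load(f)["annotations"]

# Index annotations by question_id so each question pairs with its answers.
answers_by_qid = {a["question_id"]: a for a in annotations}

for q in questions[:3]:
    ann = answers_by_qid[q["question_id"]]
    print(f"Image {q['image_id']}: {q['question']}")
    # Each annotation carries multiple human answers, as in VQA-style
    # soft-accuracy evaluation.
    print("  answers:", [a["answer"] for a in ann["answers"]])
```

Keeping multiple human answers per question, rather than a single label, matches the standard VQA evaluation convention where a predicted answer is scored against agreement among annotators.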