Paper Reading AI Learner

OK-VQA: A Visual Question Answering Benchmark Requiring External Knowledge

2019-05-31 20:29:01
Kenneth Marino, Mohammad Rastegari, Ali Farhadi, Roozbeh Mottaghi

Abstract

Visual Question Answering (VQA) in its ideal form lets us study reasoning in the joint space of vision and language and serves as a proxy for the AI task of scene understanding. However, most VQA benchmarks to date are focused on questions such as simple counting, visual attributes, and object detection that do not require reasoning or knowledge beyond what is in the image. In this paper, we address the task of knowledge-based visual question answering and provide a benchmark, called OK-VQA, where the image content is not sufficient to answer the questions, encouraging methods that rely on external knowledge resources. Our new dataset includes more than 14,000 questions that require external knowledge to answer. We show that the performance of the state-of-the-art VQA models degrades drastically in this new setting. Our analysis shows that our knowledge-based VQA task is diverse, difficult, and large compared to previous knowledge-based VQA datasets. We hope that this dataset enables researchers to open up new avenues for research in this domain. See the project page to download and browse the dataset.
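For readers who want to browse the data once downloaded, below is a minimal Python sketch that pairs OK-VQA questions with their annotator-provided answers and scores a prediction with VQA-style soft accuracy. It assumes the dataset's VQA-style JSON annotation layout; the file names and the vqa_soft_accuracy helper here are illustrative placeholders, not something specified in the abstract.

    import json
    from collections import Counter

    # Illustrative file names (adjust to your OK-VQA download); the
    # dataset is distributed in the VQA-style JSON annotation format.
    QUESTIONS_PATH = "OpenEnded_mscoco_train2014_questions.json"
    ANNOTATIONS_PATH = "mscoco_train2014_annotations.json"

    with open(QUESTIONS_PATH) as f:
        questions = json.load(f)["questions"]
    with open(ANNOTATIONS_PATH) as f:
        annotations = json.load(f)["annotations"]

    # Join each question to its human answers via question_id.
    ann_by_qid = {a["question_id"]: a for a in annotations}

    def vqa_soft_accuracy(prediction, human_answers):
        # VQA-style soft accuracy: a prediction earns full credit
        # if at least three annotators gave exactly this answer.
        return min(human_answers.count(prediction) / 3.0, 1.0)

    for q in questions[:3]:
        ann = ann_by_qid[q["question_id"]]
        human_answers = [a["answer"] for a in ann["answers"]]
        top_answer = Counter(human_answers).most_common(1)[0][0]
        print(q["question"], f'(image {q["image_id"]})')
        print("  top answers:", Counter(human_answers).most_common(3))
        print("  score if predicting the top answer:",
              vqa_soft_accuracy(top_answer, human_answers))

Under this kind of soft metric, a predicted answer gets partial credit when only some annotators agree with it, which is a more forgiving fit for open-ended knowledge questions than exact string match.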


URL

https://arxiv.org/abs/1906.00067

PDF

https://arxiv.org/pdf/1906.00067.pdf

