Abstract
We propose the inverse problem of visual question answering (iVQA), and explore its suitability as a benchmark for visuo-linguistic understanding. The iVQA task is to generate a question that corresponds to a given image and answer pair. Since the answers are less informative than the questions, and the questions have less learnable bias, an iVQA model needs a deeper understanding of the image than a VQA model to be successful. We pose question generation as a multi-modal dynamic inference process and propose an iVQA model that can gradually adjust its focus of attention, guided by both the partially generated question and the answer. For evaluation, apart from existing linguistic metrics, we propose a new ranking metric. This metric compares the ground-truth question's rank among a list of distractors, which allows the drawbacks of different algorithms and sources of error to be studied. Experimental results show that our model can generate diverse, grammatically correct, and content-correlated questions that match the given answer.
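The abstract gives no implementation details, but the proposed ranking metric is concrete enough to sketch. The snippet below is an illustrative assumption, not the authors' code: it ranks the ground-truth question among distractors for one (image, answer) pair using a hypothetical model scorer, and aggregates with a recall@k helper. The scorer interface and the recall@k aggregation are assumptions made here for illustration; the paper's exact scoring protocol may differ.

```python
from typing import Callable, Sequence

def question_rank(
    score: Callable[[str, str, str], float],  # hypothetical scorer:
                                              # (image_id, answer, question) -> model score
    image_id: str,
    answer: str,
    gt_question: str,
    distractors: Sequence[str],
) -> int:
    """Return the 1-based rank of the ground-truth question among distractors."""
    gt_score = score(image_id, answer, gt_question)
    distractor_scores = [score(image_id, answer, q) for q in distractors]
    # Rank = 1 + number of distractors the model scores strictly higher than the GT.
    return 1 + sum(s > gt_score for s in distractor_scores)

def recall_at_k(ranks: Sequence[int], k: int) -> float:
    """Fraction of examples whose ground-truth question ranks in the top k."""
    return sum(r <= k for r in ranks) / len(ranks)
```

The dynamic inference process can likewise be pictured as re-computing attention over image regions at every decoding step, conditioned on a state that summarizes the answer and the question generated so far. The toy step below uses assumed shapes and a bilinear scoring function chosen for illustration; it is not the paper's architecture.

```python
import numpy as np

def attend(regions: np.ndarray, state: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One attention step over image features.

    regions: (R, d) image region features
    state:   (d,)   decoder state encoding the answer and partial question
    W:       (d, d) bilinear weight matrix (assumed scoring form)
    """
    logits = regions @ (W @ state)            # relevance of each region to the state
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                  # softmax over the R regions
    return weights @ regions                  # attended image summary, shape (d,)
```

Called once per generated word, such a step lets the attention focus shift as the partial question grows, which is the behavior the abstract describes.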
URL
https://arxiv.org/abs/1710.03370