Abstract
Visual question answering (VQA) is an important and challenging multimodal task in computer vision. Recently, a few efforts have been made to bring the VQA task to aerial images, owing to its potential real-world applications in disaster monitoring, urban planning, and digital earth product generation. However, both the huge variation in the appearance, scale, and orientation of the concepts in aerial images and the scarcity of well-annotated datasets restrict the development of VQA in this domain. In this paper, we introduce a new dataset, HRVQA, which provides 53,512 aerial images of 1024×1024 pixels and 1,070,240 semi-automatically generated QA pairs. To benchmark the understanding capability of VQA models on aerial images, we evaluate the relevant methods on HRVQA. Moreover, we propose a novel model, GFTransformer, with gated attention modules and a mutual fusion module. The experiments show that the proposed dataset is quite challenging, especially for attribute-related questions. Our method achieves superior performance compared to previous state-of-the-art approaches. The dataset and the source code will be released at this https URL.
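The abstract names the gated attention modules without detailing them. As a minimal, hypothetical sketch of what such a module could look like in PyTorch (the class name, the sigmoid gate, and all dimensions below are assumptions for illustration, not the paper's actual GFTransformer design):

```python
# Hypothetical gated attention block; every design choice here is an
# assumption, since the abstract does not specify the architecture.
import torch
import torch.nn as nn

class GatedAttention(nn.Module):
    """Multi-head self-attention whose output is modulated by a learned
    sigmoid gate, so uninformative attended features can be suppressed."""

    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Per-channel gate in (0, 1), computed from the input tokens.
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attended, _ = self.attn(x, x, x)   # standard self-attention
        gated = self.gate(x) * attended    # element-wise gating
        return self.norm(x + gated)        # residual + layer norm

if __name__ == "__main__":
    tokens = torch.randn(2, 196, 512)      # e.g. flattened image patches
    print(GatedAttention()(tokens).shape)  # torch.Size([2, 196, 512])
```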
URL
https://arxiv.org/abs/2301.09460